WorldWideScience

Sample records for reducing systematic errors

  1. Reducing systematic errors in measurements made by a SQUID magnetometer

    International Nuclear Information System (INIS)

    Kiss, L.F.; Kaptás, D.; Balogh, J.

    2014-01-01

A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors – radial displacement in particular – and not by instrumental or environmental noise. - Highlights: • A simple method is described which reduces systematic errors of a SQUID. • The errors arise from a radial displacement of the sample in the gradiometer coil. • The procedure is to rotate the sample rod (with the sample) around its axis. • The best fit to the SQUID voltage has to be attained moving the sample through the coil. • The accuracy of measuring magnetic moment can be increased significantly

  2. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
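The unisim/multisim distinction described above can be made concrete with a toy linear model. Everything below is a hypothetical sketch, not the paper's code: the three systematic parameters, their unit-normal priors and the linear sensitivities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: the observable responds linearly to three
# systematic parameters with these (invented) sensitivities per 1 SD shift.
slopes = np.array([0.5, -0.3, 0.8])

def observable(params, mc_noise_sd, rng):
    # Linear systematic response plus statistical noise of the MC sample.
    return slopes @ params + rng.normal(0.0, mc_noise_sd)

def unisim_variance(mc_noise_sd, rng):
    # One extra MC run per parameter, varied by +1 standard deviation;
    # the quadrature sum of the shifts estimates the systematic variance.
    nominal = observable(np.zeros(3), mc_noise_sd, rng)
    return sum((observable(np.eye(3)[i], mc_noise_sd, rng) - nominal) ** 2
               for i in range(3))

def multisim_variance(n_runs, mc_noise_sd, rng):
    # Every run varies all parameters at once, each drawn from its
    # (here unit-normal) prior; the spread of the results is the estimate.
    results = [observable(rng.normal(size=3), mc_noise_sd, rng)
               for _ in range(n_runs)]
    return np.var(results, ddof=1)

exact = float(np.sum(slopes ** 2))   # true systematic variance of this toy
print(exact, unisim_variance(0.0, rng), multisim_variance(5000, 0.01, rng))
```

With zero MC noise the unisim sum recovers the exact systematic variance of the linear model; the multisim sample variance converges to the same value plus the (small) MC statistical variance, illustrating why the better method depends on the ratio of statistical to systematic error.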

  3. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k²

  4. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  5. Strategies to reduce the systematic error due to tumor and rectum motion in radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Hoogeman, Mischa S.; Herk, Marcel van; Bois, Josien de; Lebesque, Joos V.

    2005-01-01

Background and purpose: The goal of this work is to develop and evaluate strategies to reduce the uncertainty in the prostate position and rectum shape that arises in the preparation stage of the radiation treatment of prostate cancer. Patients and methods: Nineteen prostate cancer patients, who were treated with 3-dimensional conformal radiotherapy, each received a planning CT scan and 8-13 repeat CT scans during the treatment period. We quantified prostate motion relative to the pelvic bone by first matching the repeat CT scans on the planning CT scan using the bony anatomy. Subsequently, each contoured prostate, including seminal vesicles, was matched on the prostate in the planning CT scan to obtain the translations and rotations. The variation in prostate position was determined in terms of the systematic, random and group mean error. We tested the performance of two correction strategies to reduce the systematic error due to prostate motion. The first strategy, the pre-treatment strategy, used only the initial rectum volume in the planning CT scan to adjust the angle of the prostate with respect to the left-right (LR) axis and the shape and position of the rectum. The second strategy, the adaptive strategy, used the data of repeat CT scans to improve the estimate of the prostate position and rectum shape during the treatment. Results: The largest component of prostate motion was a rotation around the LR axis. The systematic error (1 SD) was 5.1 deg and the random error was 3.6 deg (1 SD). The average LR-axis rotation between the planning and the repeat CT scans correlated significantly with the rectum volume in the planning CT scan (r=0.86, P<0.0001). Correction of the rotational position on the basis of the planning rectum volume alone reduced the systematic error by 28%. A correction, based on the data of the planning CT scan and 4 repeat CT scans, reduced the systematic error over the complete treatment period by a factor of 2. When the correction was
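The systematic, random and group mean errors quoted in studies of this kind are conventionally computed from repeat-scan data as follows. This is a generic sketch with invented numbers, not the authors' data: the systematic error Σ is the standard deviation of the per-patient mean deviations, the random error σ is the root mean square of the per-patient standard deviations, and M is the group mean.

```python
import numpy as np

# Invented repeat-scan deviations (degrees) for four patients; each array
# holds one patient's per-scan deviation from the planning position.
rotations = [
    np.array([4.0, 6.0, 5.0]),
    np.array([-2.0, 0.0, -1.0]),
    np.array([8.0, 7.0, 9.0]),
    np.array([1.0, 2.0, 3.0]),
]

patient_means = np.array([r.mean() for r in rotations])
group_mean = patient_means.mean()           # M: mean error of the group
systematic_sd = patient_means.std(ddof=1)   # Sigma: SD of patient means
random_sd = np.sqrt(np.mean([r.var(ddof=1) for r in rotations]))  # sigma

print(group_mean, systematic_sd, random_sd)
```

In this convention a correction strategy that shifts each patient's mean deviation toward zero reduces Σ, while σ (the scan-to-scan scatter) is untouched, which is exactly the behaviour the abstract reports.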

  6. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are used rarely. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the share of systematic errors and random errors in the total error of exemplary probes is determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation would not, in this case, yield any significant benefits.

  7. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward
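Two of the abstract's claims — that common ('systematic') and uncorrelated ('statistical') errors add in quadrature, and that repetition shrinks only the statistical part — are easy to check numerically. The sketch below assumes Gaussian errors with invented magnitudes (statistical SD 1.0, common systematic SD 0.5):

```python
import numpy as np

rng = np.random.default_rng(1)

def spread_of_averages(n_repeats, stat_sd, syst_sd, n_trials, rng):
    # Each trial: one common (systematic) offset shared by all repeats of
    # a measurement, plus independent statistical noise on each repeat;
    # returns the observed spread of the averaged results.
    common = rng.normal(0.0, syst_sd, size=n_trials)
    stat = rng.normal(0.0, stat_sd, size=(n_trials, n_repeats)).mean(axis=1)
    return (common + stat).std()

stat_sd, syst_sd = 1.0, 0.5
for n in (1, 100):
    observed = spread_of_averages(n, stat_sd, syst_sd, 50_000, rng)
    predicted = np.hypot(stat_sd / np.sqrt(n), syst_sd)  # quadrature sum
    print(n, observed, predicted)
```

Averaging 100 repeats shrinks the statistical contribution tenfold, but the total uncertainty saturates at the common-error floor of 0.5, matching the quadrature formula in both cases.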

  8. Physical predictions from lattice QCD. Reducing systematic errors

    International Nuclear Information System (INIS)

    Pittori, C.

    1994-01-01

Some recent developments in the theoretical understanding of lattice quantum chromodynamics and of its possible sources of systematic errors are reported, and a review of some of the latest Monte Carlo results for light-quark phenomenology is presented. A very general introduction to quantum field theory on a discrete spacetime lattice is given, and the Monte Carlo methods that allow one to compute many interesting physical quantities in the non-perturbative domain of strong interactions are illustrated. (author). 17 refs., 3 figs., 3 tabs

  9. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

The aim is to provide an organizational schema for systematic error and random error in estimating causal measures, clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision.

  10. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  11. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design

  12. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER

    International Nuclear Information System (INIS)

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-01-01

Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) penta-prism scanning mode without a reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately

  13. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung …
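A one-dimensional simulation makes the paper's point concrete. The integrand, grid size and Gaussian placement-error model below are invented for illustration: an exactly periodic grid integrates this smooth periodic function with (here) zero variance, while small random placement errors inflate the estimator variance substantially.

```python
import numpy as np

rng = np.random.default_rng(2)

def systematic_estimates(f, n, jitter_sd, n_trials, rng):
    # Systematic sampling on [0, 1): a grid of n points with a uniform
    # random start, optionally perturbed by Gaussian placement errors.
    start = rng.uniform(0.0, 1.0 / n, size=(n_trials, 1))
    grid = start + np.arange(n) / n
    if jitter_sd:
        grid = grid + rng.normal(0.0, jitter_sd, size=grid.shape)
    return f(grid % 1.0).mean(axis=1)        # estimates of the integral

f = lambda x: np.sin(2.0 * np.pi * x) + 1.0  # true integral over [0, 1] is 1
exact = systematic_estimates(f, 16, 0.0, 20_000, rng)
jittered = systematic_estimates(f, 16, 0.02, 20_000, rng)
print(exact.var(), jittered.var())  # placement errors inflate the variance
```

The exact grid hits the sine wave at 16 equally spaced phases, so the oscillating part cancels identically for every random start; a 2% placement jitter breaks that cancellation and produces a strictly positive variance.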

  14. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps for improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014) and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates of using smart pumps, the overriding of soft alerts, non-intercepted errors, or the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug

  15. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)


Systematic error growth rate peak is observed at wavenumber 2 up to 4-day forecast, then … the influence of summer systematic error and random … total exchange. When the error energy budgets are examined in spectral domain, one may ask questions on the error growth at a certain wavenumber from its interaction with …

  16. Systematic errors in VLF direction-finding of whistler ducts

    International Nuclear Information System (INIS)

    Strangeways, H.J.; Rycroft, M.J.

    1980-01-01

    In the previous paper it was shown that the systematic error in the azimuthal bearing due to multipath propagation and incident wave polarisation (when this also constitutes an error) was given by only three different forms for all VLF direction-finders currently used to investigate the position of whistler ducts. In this paper the magnitude of this error is investigated for different ionospheric and ground parameters for these three different systematic error types. By incorporating an ionosphere for which the refractive index is given by the full Appleton-Hartree formula, the variation of the systematic error with ionospheric electron density and latitude and direction of propagation is investigated in addition to the variation with wave frequency, ground conductivity and dielectric constant and distance of propagation. The systematic bearing error is also investigated for the three methods when the azimuthal bearing is averaged over a 2 kHz bandwidth. This is found to lead to a significantly smaller bearing error which, for the crossed-loops goniometer, approximates the bearing error calculated when phase-dependent terms in the receiver response are ignored. (author)

  17. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of the radionuclides that suddenly and without control reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and the measurement itself, which contribute to a high degree to the total mass activity evaluation error. Statistical errors in gamma spectrometry as well as in total mass alpha and beta activity evaluation are also discussed. Besides, some of the possible sources of errors in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of these errors to the total mass activity evaluation error is estimated and procedures that could possibly reduce it are discussed (author)

  18. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

The purpose of this study is to determine the patient setup uncertainties in deep-inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated by 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motions were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motions were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, which has less impact due to the stability in organ movement from DIBH. The systematic reproducibility is also half of the random error because of the high efficiency of the modern linac machine, which can reduce the systematic uncertainty effectively, while the random errors are uncontrollable. (paper)

  19. Investigation of systematic errors of metastable "atomic pair" number

    CERN Document Server

    Yazkov, V

    2015-01-01

Sources of systematic errors in the analysis of data collected in 2012 are analysed. Estimations of systematic errors in the number of “atomic pairs” from metastable π⁺π⁻ atoms are presented.

  20. The use of adaptive radiation therapy to reduce setup error: a prospective clinical study

    International Nuclear Information System (INIS)

    Yan Di; Wong, John; Vicini, Frank; Robertson, John; Horwitz, Eric; Brabbins, Donald; Cook, Carla; Gustafson, Gary; Stromberg, Jannifer; Martinez, Alvaro

    1996-01-01

, eight patients had completed the study. Their mean systematic setup error was 4 mm with a range of 2 mm to 6 mm before adjustment, and was reduced to 0.8 mm with a range of 0.2 mm to 1.8 mm after adjustment. There was no significant difference in their random setup errors before and after adjustment. Analysis of the block overlap distributions shows that the fractions of the prescribed field areas covered by the daily treatment increased after setup adjustment. The block overlap distributions also show that the magnitude of random setup errors at different field edges was different; 50% of these were small enough to allow the treatment margin to be reduced to 4 mm or less. Results from the on-going treatments of the remaining 12 patients show similar trends and magnitudes, and are not expected to be different. Conclusion: Our prospective study demonstrates that the ART process provides an effective and reliable approach to compensate for the systematic setup error of the individual patient. Adjusting the MLC field allows accurate setup adjustment as small as 2 mm, minimizes the possibility of 'unsettling' the patient and reduces the workload of the therapists. The ART process can be extended to correct for random setup errors by further modification of the MLC field shape and prescribed dose. Most importantly, ART integrates the use of advanced technologies to maximize treatment benefits, and can be important in the implementation of dose-escalated conformal therapy

  1. Random and systematic beam modulator errors in dynamic intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Parsai, Homayon; Cho, Paul S; Phillips, Mark H; Giansiracusa, Robert S; Axen, David

    2003-01-01

This paper reports on the dosimetric effects of random and systematic modulator errors in delivery of dynamic intensity modulated beams. A sliding-window type delivery that utilizes a combination of multileaf collimators (MLCs) and backup diaphragms was examined. Gaussian functions with standard deviations ranging from 0.5 to 1.5 mm were used to simulate random positioning errors. A clinical example involving a clival meningioma was chosen with optic chiasm and brain stem as limiting critical structures in the vicinity of the tumour. Dose calculations for different modulator fluctuations were performed, and a quantitative analysis was carried out based on cumulative and differential dose volume histograms for the gross target volume and surrounding critical structures. The study indicated that random modulator errors have a strong tendency to reduce minimum target dose and homogeneity. Furthermore, it was shown that random perturbation of both MLCs and backup diaphragms in the order of σ = 1 mm can lead to 5% errors in prescribed dose. In comparison, when the MLCs or the backup diaphragms alone were perturbed, the system was more robust and modulator errors of at least σ = 1.5 mm were required to cause dose discrepancies greater than 5%. For systematic perturbation, even errors in the order of ±0.5 mm were shown to result in significant dosimetric deviations
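The scale of these effects can be reproduced with a deliberately crude fluence model. The sketch below is invented, not the paper's dose engine: it assumes the delivered fluence in a sliding-window delivery is proportional to the leaf gap (nominal 10 mm here) at each of 50 control points. Under that assumption a systematic +0.5 mm gap error biases the dose by 5%, while random 1 mm gap errors largely average out across control points.

```python
import numpy as np

rng = np.random.default_rng(3)

NOMINAL_GAP_MM = 10.0   # invented nominal sliding-window leaf gap
N_SEGMENTS = 50         # invented number of control points

def relative_dose(systematic_mm=0.0, random_sd_mm=0.0, rng=None):
    # Crude model: delivered dose is proportional to the mean leaf gap,
    # normalised so that the unperturbed delivery gives exactly 1.0.
    gaps = np.full(N_SEGMENTS, NOMINAL_GAP_MM + systematic_mm)
    if random_sd_mm:
        gaps = gaps + rng.normal(0.0, random_sd_mm, size=N_SEGMENTS)
    return gaps.mean() / NOMINAL_GAP_MM

systematic = relative_dose(systematic_mm=0.5)   # coherent 5% overdose
random_runs = [relative_dose(random_sd_mm=1.0, rng=rng) for _ in range(2000)]
print(systematic, np.std(random_runs))
```

The contrast mirrors the abstract's finding: a systematic offset shifts every segment the same way and biases the dose directly, whereas independent random offsets of twice the size produce only a ~1.4% spread after averaging over the 50 segments.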

  2. Systematic errors due to linear congruential random-number generators with the Swendsen-Wang algorithm: a warning.

    Science.gov (United States)

    Ossola, Giovanni; Sokal, Alan D

    2004-08-01

    We show that linear congruential pseudo-random-number generators can cause systematic errors in Monte Carlo simulations using the Swendsen-Wang algorithm, if the lattice size is a multiple of a very large power of 2 and one random number is used per bond. These systematic errors arise from correlations within a single bond-update half-sweep. The errors can be eliminated (or at least radically reduced) by updating the bonds in a random order or in an aperiodic manner. It also helps to use a generator of large modulus (e.g., 60 or more bits).
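The low-order-bit pathology of power-of-two-modulus LCGs, which underlies such lattice-size-dependent correlations, is easy to exhibit. The generator constants below are the common Numerical Recipes ones, chosen purely for illustration:

```python
# A minimal linear congruential generator with modulus 2^32. With an odd
# increment, the lowest-order bit simply alternates (period 2), and bit k
# has period at most 2^(k+1). A lattice whose size is a large power of 2,
# updated with one random number per bond in a fixed order, can sample
# these short-period bit patterns coherently.
M = 2**32
A, C = 1664525, 1013904223

def lcg_stream(seed, n):
    x = seed
    out = []
    for _ in range(n):
        x = (A * x + C) % M
        out.append(x)
    return out

low_bits = [x & 1 for x in lcg_stream(12345, 16)]
print(low_bits)   # strictly alternating 0/1 pattern
```

Updating bonds in a random or aperiodic order, as the abstract recommends, breaks the phase relation between these bit cycles and the lattice geometry.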

  3. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

We study the treatment of systematic errors in the determination of v2 for charged jets in √sNN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ2 and p-values relative to a null (zero) result. We then recast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
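The equivalence can be reproduced on toy numbers: profiling a nuisance parameter for a fully correlated systematic gives the same χ² against a null result as folding that systematic into a covariance matrix C = diag(σ_stat²) + b bᵀ. The data values below are made up for illustration, not the ALICE measurements.

```python
import numpy as np

# Toy measured values d, statistical errors s, and a fully correlated
# systematic component b per point (hypothetical numbers).
d = np.array([0.05, 0.04, 0.06, 0.03])
s = np.array([0.010, 0.012, 0.011, 0.013])
b = np.array([0.005, 0.004, 0.006, 0.003])

# Method 1: profile a single nuisance parameter eps for the shared shift,
# chi2(eps) = sum_i (d_i - eps*b_i)^2 / s_i^2 + eps^2, minimized over eps.
num = np.sum(d * b / s**2)
den = 1.0 + np.sum(b**2 / s**2)
eps_hat = num / den
chi2_nuisance = np.sum((d - eps_hat * b)**2 / s**2) + eps_hat**2

# Method 2: fold the systematic into a covariance matrix
# C = diag(s^2) + b b^T and compute chi2 = d^T C^{-1} d.
C = np.diag(s**2) + np.outer(b, b)
chi2_cov = d @ np.linalg.solve(C, d)

print(chi2_nuisance, chi2_cov)   # identical up to rounding
```

The identity follows from the Sherman-Morrison formula for the inverse of a rank-one update.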

  4. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of supplement 1 of GUM. (paper)
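The contrast between the two evaluations can be sketched as follows. The interval half-width and the simulated readings are hypothetical, chosen only to illustrate the type B (uniform distribution, u = a/√3) versus type A (repeated measurements, with a correction for a non-centred offset) procedures.

```python
import math
import numpy as np

# Type B: the manufacturer specifies that the offset lies within +/- a
# (here a hypothetical a = 0.5 mV); the maximum entropy principle assigns
# a uniform distribution, giving u_B = a / sqrt(3) and no correction.
a = 0.5e-3                     # half-width of the specified interval (V)
u_B = a / math.sqrt(3)

# Type A: repeated zero-input readings (simulated here). As in the paper's
# experiments, the distribution is not centred on zero, so the mean yields
# a correction and its standard error the uncertainty.
rng = np.random.default_rng(1)
readings = rng.normal(0.12e-3, 0.05e-3, size=50)   # offset drifted off zero
correction = -readings.mean()
u_A = readings.std(ddof=1) / math.sqrt(len(readings))

print(f"type B: u = {u_B*1e3:.3f} mV (no correction)")
print(f"type A: correction = {correction*1e3:+.3f} mV, u = {u_A*1e3:.3f} mV")
```

The type A result is both smaller and accompanied by a correction, which is the substantial difference the abstract highlights.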

  5. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    Arima, Tatsumi

    1993-01-01

The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and clustering effects is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, obtained by comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.)

  6. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and clustering effects is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, obtained by comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.)

  7. Electronic portal image assisted reduction of systematic set-up errors in head and neck irradiation

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Soernsen de Koste, John R. van; Creutzberg, Carien L.; Visser, Andries G.; Levendag, Peter C.; Heijmen, Ben J.M.

    2001-01-01

Purpose: To quantify systematic and random patient set-up errors in head and neck irradiation and to investigate the impact of an off-line correction protocol on the systematic errors. Material and methods: Electronic portal images were obtained for 31 patients treated for primary supra-glottic larynx carcinoma who were immobilised using a polyvinyl chloride cast. The observed patient set-up errors were input to the shrinking action level (SAL) off-line decision protocol and appropriate set-up corrections were applied. To assess the impact of the protocol, the positioning accuracy without application of set-up corrections was reconstructed. Results: The set-up errors obtained without set-up corrections (1 standard deviation (SD)=1.5-2 mm for random and systematic errors) were comparable to those reported in other studies on similar fixation devices. On average, six fractions per patient were imaged and the set-up of half the patients was changed due to the decision protocol. Most changes were detected during weekly check measurements, not during the first days of treatment. The application of the SAL protocol reduced the width of the distribution of systematic errors to 1 mm (1 SD), as expected from simulations. A retrospective analysis showed that this accuracy should be attainable with only two measurements per patient using a different off-line correction protocol, which does not apply action levels. Conclusions: Off-line verification protocols can be particularly effective in head and neck patients because the random set-up errors are small. The excellent set-up reproducibility that can be achieved with such protocols enables accurate dose delivery in conformal treatments
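A one-axis sketch of a shrinking-action-level check of the kind described above. The action level α, the number of monitored fractions and the measured errors are illustrative values, not those used in the paper.

```python
import numpy as np

def sal_protocol(setup_errors, alpha=6.0, n_max=4):
    """Shrinking action level (SAL) off-line correction, one axis, in mm.
    After the n-th image since the last correction, the running mean of
    the observed set-up error is compared with alpha / sqrt(n); if it is
    exceeded, a set-up correction of minus the mean is applied and the
    protocol restarts. Parameters here are illustrative."""
    correction = 0.0
    window = []
    for e in setup_errors:
        observed = e + correction         # residual error seen on the image
        window.append(observed)
        n = len(window)
        if n <= n_max and abs(np.mean(window)) > alpha / np.sqrt(n):
            correction -= np.mean(window)
            window = []                   # SAL restarts after a correction
    return correction

# measured set-up errors (mm): ~4 mm systematic plus a small random spread
errors = [4.8, 3.5, 4.2, 3.9, 4.4, 3.6]
print("applied correction: %.2f mm" % sal_protocol(errors))   # about -4.17
```

Because the action level shrinks as 1/√n, a persistent systematic error is caught within a few fractions, while purely random scatter averages out and triggers no correction.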

  8. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

International Target Values (ITV) show random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be periodically evaluated and checked against the ITV for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed with focus on (1) clearly specifying the error calculation model, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)
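As a classical baseline for this kind of evaluation (not the paper's own method, which additionally guarantees positive variances and yields a posterior distribution), random and systematic variance components can be estimated from grouped measurement data with a one-way ANOVA moment estimator:

```python
import numpy as np

def variance_components(groups):
    """Moment (one-way ANOVA) estimates of the random (within-group) and
    systematic (between-group) error variances from repeated measurement
    differences, grouped e.g. by calibration period. A negative moment
    estimate of the systematic component is clipped to zero; avoiding
    that clipping is one motivation for the paper's method, so this is
    only an illustrative classical baseline."""
    k = len(groups)
    n = len(groups[0])                      # balanced design assumed
    grand = np.mean([np.mean(g) for g in groups])
    msw = np.mean([np.var(g, ddof=1) for g in groups])           # within
    msb = n * np.sum([(np.mean(g) - grand)**2 for g in groups]) / (k - 1)
    var_random = msw
    var_systematic = max(0.0, (msb - msw) / n)
    return var_random, var_systematic

rng = np.random.default_rng(3)
# five calibration periods, each with its own systematic shift (sd 0.3)
# and six measurements with random error sd 0.5 (arbitrary units)
groups = [rng.normal(rng.normal(0, 0.3), 0.5, size=6) for _ in range(5)]
vr, vs = variance_components(groups)
print(f"random variance ~ {vr:.3f}, systematic variance ~ {vs:.3f}")
```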

  9. Systematic Errors in Dimensional X-ray Computed Tomography

    DEFF Research Database (Denmark)

    that it is possible to compensate them. In dimensional X-ray computed tomography (CT), many physical quantities influence the final result. However, it is important to know which factors in CT measurements potentially lead to systematic errors. In this talk, typical error sources in dimensional X-ray CT are discussed...

  10. Characterization of electromagnetic fields in the αSPECT spectrometer and reduction of systematic errors

    International Nuclear Information System (INIS)

    Ayala Guardia, Fidel

    2011-10-01

The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of the free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed in the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully performed. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits improvement of the present literature value of a. Furthermore, a custom NMR-magnetometer was developed and improved during this thesis, which will lead to reduction of magnetic field-related uncertainties down to a level negligible for determining a with a total relative error of 0.3%.

  11. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution is more likely a by-product of crowding (due to response bias) than a cause of it. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  12. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
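Scalar timing with asymmetric start/stop decision criteria reproduces both effects qualitatively: the mean centre overshoots the target in proportion to it (scalar systematic error), and the spread of centres grows in proportion to it (scalar variability). The criterion fractions and coefficient of variation below are illustrative, not fitted to the mouse data.

```python
import numpy as np

rng = np.random.default_rng(4)

def peak_trial(target, cv=0.15, start_frac=0.75, stop_frac=1.45):
    """One peak-procedure trial under scalar timing: start and stop
    thresholds are fixed fractions of the remembered target time, and
    all times share multiplicative (scalar) noise with coefficient of
    variation cv. Parameter values are illustrative."""
    noise = rng.normal(1.0, cv)
    start = start_frac * target * noise
    stop = stop_frac * target * noise
    return start, stop, (start + stop) / 2.0

for target in (5.0, 15.0, 45.0):
    centers = np.array([peak_trial(target)[2] for _ in range(2000)])
    print(f"target {target:4.0f}s: mean centre {centers.mean():5.1f}s, "
          f"sd {centers.std():4.1f}s  (sd/target = {centers.std()/target:.2f})")
```

Because the start and stop criteria average to more than 1.0 of the target, the centre is systematically late, mirroring the asymmetric-criteria explanation the authors favour.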

  13. SHERPA: A systematic human error reduction and prediction approach

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1986-01-01

This paper describes a Systematic Human Error Reduction and Prediction Approach (SHERPA) which is intended to provide guidelines for human error reduction and quantification in a wide range of human-machine systems. The approach utilizes as its basis current cognitive models of human performance. The first module in SHERPA performs task and human error analyses, which identify likely error modes, together with guidelines for the reduction of these errors by training, procedures and equipment redesign. The second module uses a SARAH approach to quantify the probability of occurrence of the errors identified earlier, and provides cost-benefit analyses to assist in choosing the appropriate error reduction approaches in the third module

  14. The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews.

    Science.gov (United States)

    Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing

    2017-09-05

    Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 to be reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for

  15. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Science.gov (United States)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  16. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Moricciani, D. [INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others

    2012-02-01

This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  17. Characterization of electromagnetic fields in the aSPECT spectrometer and reduction of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Ayala Guardia, Fidel

    2011-10-15

The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of the free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed in the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully performed. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits improvement of the present literature value of a. Furthermore, a custom NMR-magnetometer was developed and improved during this thesis, which will lead to reduction of magnetic field-related uncertainties down to a level negligible for determining a with a total relative error of 0.3%.

  18. Seeing your error alters my pointing: observing systematic pointing errors induces sensori-motor after-effects.

    Directory of Open Access Journals (Sweden)

    Roberta Ronchi

During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of 'feeling' the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.

  19. Saccades to remembered target locations: an analysis of systematic and variable errors.

    Science.gov (United States)

    White, J M; Sparks, D L; Stanford, T R

    1994-01-01

    We studied the effects of varying delay interval on the accuracy and velocity of saccades to the remembered locations of visual targets. Remembered saccades were less accurate than control saccades. Both systematic and variable errors contributed to the loss of accuracy. Systematic errors were similar in size for delay intervals ranging from 400 msec to 5.6 sec, but variable errors increased monotonically as delay intervals were lengthened. Compared to control saccades, remembered saccades were slower and the peak velocities were more variable. However, neither peak velocity nor variability in peak velocity was related to the duration of the delay interval. Our findings indicate that a memory-related process is not the major source of the systematic errors observed on memory trials.
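The decomposition used above reduces to a mean offset (systematic error) and the dispersion about that mean (variable error). A one-dimensional sketch with hypothetical saccade endpoints:

```python
import numpy as np

def error_components(landing, target):
    """Split saccade endpoint errors into a systematic component (mean
    offset from the target) and a variable component (dispersion about
    the mean endpoint), here in one dimension and in degrees."""
    errors = np.asarray(landing) - target
    systematic = errors.mean()
    variable = errors.std(ddof=1)
    return systematic, variable

# hypothetical endpoints (deg) for a remembered 10-degree target
landing = [9.1, 9.4, 8.8, 9.6, 9.0, 9.3, 8.7, 9.5]
sys_err, var_err = error_components(landing, target=10.0)
print(f"systematic error: {sys_err:+.2f} deg, variable error: {var_err:.2f} deg")
```

In the study's terms, the systematic component stayed roughly constant across delays while the variable component grew with delay, which is what implicates memory in the variable errors only.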

  20. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations and experiments on a mobile robot.
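The two systematic error sources named above (wheel-base and encoder-gain errors) can be made concrete with a toy differential-drive odometry model; the numbers are arbitrary. Auto-calibration amounts to estimating these parameters by comparing such integrated poses against an absolute measurement system.

```python
import math

def odometry_pose(ticks_left, ticks_right, gain_l, gain_r, wheel_base):
    """Integrate differential-drive odometry. The gains (distance per
    encoder tick, mm) and the wheel base (mm) are exactly the quantities
    whose miscalibration produces systematic odometry errors."""
    x = y = theta = 0.0
    for tl, tr in zip(ticks_left, ticks_right):
        dl, dr = tl * gain_l, tr * gain_r
        d = 0.5 * (dl + dr)
        x += d * math.cos(theta)
        y += d * math.sin(theta)
        theta += (dr - dl) / wheel_base
    return x, y, theta

# drive straight: equal tick counts on both wheels for 100 steps
ticks = [10] * 100

# true robot: both gains 1 mm/tick; the model believes the right gain is 2% high
true_pose = odometry_pose(ticks, ticks, 1.0, 1.0, wheel_base=300.0)
est_pose = odometry_pose(ticks, ticks, 1.0, 1.02, wheel_base=300.0)
print("heading drift from a 2%% gain error: %.4f rad" % est_pose[2])
```

Even this small gain mismatch turns a straight trajectory into a curved estimate, which is why the drift grows without bound unless the parameters are calibrated.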

  1. RHIC susceptibility to variations in systematic magnetic harmonic errors

    International Nuclear Information System (INIS)

    Dell, G.F.; Peggs, S.; Pilat, F.; Satogata, T.; Tepikian, S.; Trbojevic, D.; Wei, J.

    1994-01-01

    Results of a study to determine the sensitivity of tune to uncertainties of the systematic magnetic harmonic errors in the 8 cm dipoles of RHIC are reported. Tolerances specified to the manufacturer for tooling and fabrication can result in systematic harmonics different from the expected values. Limits on the range of systematic harmonics have been established from magnet calculations, and the impact on tune from such harmonics has been established

  2. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-09-01

A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  3. Systematic Review of Errors in Inhaler Use

    DEFF Research Database (Denmark)

    Sanchis, Joaquin; Gich, Ignasi; Pedersen, Søren

    2016-01-01

A systematic search for articles reporting direct observation of inhaler technique by trained personnel covered the period from 1975 to 2014. Outcomes were the nature and frequencies of the three most common errors; the percentage of patients demonstrating correct, acceptable, or poor technique; and variations in these outcomes over these 40 years and when partitioned into years 1 to 20 and years 21 to 40. Analyses were conducted in accordance with recommendations from Preferred Reporting Items for Systematic Reviews and Meta-Analyses and Strengthening the Reporting of Observational Studies in Epidemiology. Results: Data...

  4. On the effect of systematic errors in near real time accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1987-01-01

Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e. for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper sequential test procedures are considered which are based on the so-called MUF-residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity if this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length for constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation, where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered in order that the efficiency of NRTA evaluation methods can be estimated realistically

  5. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)
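The claim about the ODE solver can be checked on a scalar test problem: the global error of forward Euler scales as O(Δt), while the midpoint rule (a second-order Runge-Kutta scheme) scales as O(Δt²), so at usable step sizes the trajectory error of an RK2 solver is dwarfed by the other time-step errors. This is a generic sketch, not the semiconductor transport solver itself.

```python
import math

def integrate(f, y0, t_end, dt, method):
    """Integrate dy/dt = f(y) from y(0) = y0 to t_end with fixed step dt."""
    n = round(t_end / dt)
    y = y0
    for _ in range(n):
        if method == "euler":
            y += dt * f(y)
        else:                              # midpoint rule (2nd-order RK)
            y += dt * f(y + 0.5 * dt * f(y))
    return y

f = lambda y: y                            # dy/dt = y, exact solution e^t
exact = math.e

for dt in (0.1, 0.05):
    e_euler = abs(integrate(f, 1.0, 1.0, dt, "euler") - exact)
    e_rk2 = abs(integrate(f, 1.0, 1.0, dt, "rk2") - exact)
    print(f"dt={dt}: Euler error {e_euler:.2e}, RK2 error {e_rk2:.2e}")
```

Halving the step halves the Euler error but roughly quarters the RK2 error, which is the sense in which the trajectory error becomes negligible with a second-order solver.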

  6. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    Science.gov (United States)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on Raman pulse duration and frequency step size, present the vector and tensor light shift induced magnetic field measurement offsets, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and for reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atom interferometer gravimeters.

  7. 'When measurements mean action' decision models for portal image review to eliminate systematic set-up errors

    International Nuclear Information System (INIS)

    Wratten, C.R.; Denham, J.W.; O'Brien, P.; Hamilton, C.S.; Kron, T.; London Regional Cancer Centre, London, Ontario

    2004-01-01

    The aim of the present paper is to evaluate how the use of decision models in the review of portal images can eliminate systematic set-up errors during conformal therapy. Sixteen patients undergoing four-field irradiation of prostate cancer have had daily portal images obtained during the first two treatment weeks and weekly thereafter. The magnitude of random and systematic variations has been calculated by comparison of the portal images with the reference simulator images using the two-dimensional decision model embodied in the Hotelling's evaluation process (HEP). Random day-to-day set-up variation was small in this group of patients. Systematic errors were, however, common. In 15 of 16 patients, one or more errors of >2 mm were diagnosed at some stage during treatment. Sixteen of the 23 errors were between 2 and 4 mm. Although there were examples of oversensitivity of the HEP in three cases, and one instance of undersensitivity, the HEP proved highly sensitive to the small (2-4 mm) systematic errors that must be eliminated during high precision radiotherapy. The HEP has proven valuable in diagnosing very small (2-4 mm) systematic errors. Using one-dimensional decision models, the HEP can eliminate the majority of systematic errors during the first 2 treatment weeks. Copyright (2004) Blackwell Science Pty Ltd
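
    A sketch of the kind of two-dimensional decision statistic involved, here a standard Hotelling T² test applied to synthetic daily set-up shifts; the 1.5 mm random spread and 3 mm systematic shift are invented numbers, and this is not the paper's exact HEP implementation.

```python
import numpy as np

def hotelling_t2(shifts):
    """T^2 statistic for the hypothesis that the mean 2-D set-up shift is zero."""
    x = np.asarray(shifts, dtype=float)
    n = x.shape[0]
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)          # sample covariance (n-1 denominator)
    return n * mean @ np.linalg.solve(cov, mean)

rng = np.random.default_rng(0)
random_only = rng.normal(0.0, 1.5, size=(10, 2))        # mm, pure random set-up error
systematic = random_only + np.array([3.0, 0.0])          # plus a 3 mm systematic shift
print(hotelling_t2(random_only), hotelling_t2(systematic))
```

    The statistic stays small when only random day-to-day variation is present and grows sharply once a persistent shift of a few millimetres is added, which is what makes it usable as a trigger for set-up correction.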

  8. Medication Errors in the Southeast Asian Countries: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Shahrzad Salmasi

    Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed systematically to identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, Pubmed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three were done on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and addressed.

  9. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.
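
    A minimal sketch of the smoothing idea: Gaussian process regression over noisy per-window error-rate estimates. The RBF kernel, the hyperparameters, and the synthetic drifting rate are all assumptions, and this omits the predictive variance a full protocol would also use.

```python
import numpy as np

def gp_predict(t_train, y_train, t_test, length=5.0, sigma_f=0.05, sigma_n=0.01):
    """GP posterior mean with an RBF kernel: smooths noisy error-rate estimates."""
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(t_train, t_train) + sigma_n**2 * np.eye(len(t_train))
    return k(t_test, t_train) @ np.linalg.solve(K, y_train)

t = np.arange(0.0, 60.0)
true_rate = 0.02 + 0.01 * np.sin(t / 8.0)              # slowly drifting error rate
rng = np.random.default_rng(3)
observed = true_rate + rng.normal(0.0, 0.005, t.size)  # noisy per-window estimates
smoothed = gp_predict(t, observed, t)
print(np.abs(smoothed - true_rate).mean())
```

    Because the kernel encodes that the rate drifts slowly, the posterior mean tracks the underlying trend while averaging away much of the per-window estimation noise.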

  10. Reducing diagnostic errors in medicine: what's the goal?

    Science.gov (United States)

    Graber, Mark; Gordon, Ruthanna; Franklin, Nancy

    2002-10-01

    This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.

  11. Tackling systematic errors in quantum logic gates with composite rotations

    International Nuclear Information System (INIS)

    Cummins, Holly K.; Llewellyn, Gavin; Jones, Jonathan A.

    2003-01-01

    We describe the use of composite rotations to combat systematic errors in single-qubit quantum logic gates and discuss three families of composite rotations which can be used to correct off-resonance and pulse length errors. Although developed and described within the context of nuclear magnetic resonance quantum computing, these sequences should be applicable to any implementation of quantum computation
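
    The pulse-length-error family can be illustrated numerically with the Wimperis BB1 sequence: under a fractional pulse-length error ε, a naive rotation has infidelity of order ε², while the composite version suppresses the error to much higher order. This is a numerical check of that behaviour, not the paper's derivation.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(theta, phi):
    """Rotation by theta about an axis at angle phi in the xy-plane."""
    n = np.cos(phi) * sx + np.sin(phi) * sy
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n

def infidelity(u, v):
    return 1.0 - abs(np.trace(u.conj().T @ v) / 2) ** 2

theta = np.pi / 2                    # target 90-degree rotation about x
eps = 0.05                           # 5% pulse-length (amplitude) miscalibration
target = rot(theta, 0.0)

naive = rot(theta * (1 + eps), 0.0)

phi = np.arccos(-theta / (4 * np.pi))      # BB1 phase, cos(phi) = -theta/(4*pi)
bb1 = (rot(np.pi * (1 + eps), phi)         # correction pulses, applied after the
       @ rot(2 * np.pi * (1 + eps), 3 * phi)   # (also miscalibrated) theta pulse
       @ rot(np.pi * (1 + eps), phi)
       @ rot(theta * (1 + eps), 0.0))

print(infidelity(target, naive), infidelity(target, bb1))
```

    Every pulse in the composite sequence suffers the same fractional error, yet the sequence as a whole cancels the error's leading contributions, which is the point of composite rotations.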

  12. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub
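
    The online-correction idea can be sketched on a one-variable toy model: accumulate "analysis increments" against a nature run, average them into a bias forcing, and add that forcing to the model tendency. All numbers here are invented for the sketch.

```python
dt = 0.25                                   # toy analysis cycle length ("6 hr")

def truth_step(x):                          # nature run: relaxes toward 2.0
    return x + dt * (-0.5 * (x - 2.0))

def model_step(x, forcing=0.0):             # biased model: relaxes toward 0.0
    return x + dt * (-0.5 * x + forcing)

# Analysis cycle: the increment is the (perfect-analysis) truth minus the forecast
x_t, increments = 1.0, []
for _ in range(200):
    fc = model_step(x_t)
    x_t = truth_step(x_t)
    increments.append(x_t - fc)

bias_forcing = sum(increments) / len(increments) / dt   # mean increment per unit time

# Free 20-step forecasts from the same state, with and without online correction
truth = xa = xb = 1.0
for _ in range(20):
    xa = model_step(xa, forcing=bias_forcing)
    xb = model_step(xb)
    truth = truth_step(truth)
err_corr, err_raw = abs(xa - truth), abs(xb - truth)
print(err_corr, err_raw)
```

    In this linear toy the time-mean increment recovers the missing forcing exactly, so the corrected free forecast tracks the nature run while the uncorrected one drifts; in a real model the recovered forcing is only the slowly varying part of the error.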

  13. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
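
    The equivalence described above is easy to check numerically: averaging univariate-Gaussian likelihoods over samples of a shared systematic error reproduces the multivariate-Gaussian likelihood obtained with matrix inversion. A toy sketch with assumed σ values:

```python
import numpy as np

rng = np.random.default_rng(7)
n, sig_r, sig_s = 5, 0.10, 0.05
mu = np.linspace(1.0, 2.0, n)                              # model prediction
y = mu + rng.normal(0, sig_r, n) + rng.normal(0, sig_s)    # synthetic experiment

# Conventional route: multivariate Gaussian with the full covariance matrix
cov = sig_r**2 * np.eye(n) + sig_s**2 * np.ones((n, n))
d = y - mu
L_exact = np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(
    (2 * np.pi) ** n * np.linalg.det(cov))

# Sampling route: draw the shared systematic error, average univariate likelihoods
s = rng.normal(0, sig_s, 200_000)
z = (d[None, :] - s[:, None]) / sig_r
L_sampled = np.mean(np.exp(-0.5 * (z**2).sum(axis=1))) / (
    np.sqrt(2 * np.pi) * sig_r) ** n
print(L_exact, L_sampled)
```

    The two numbers agree to within Monte Carlo noise, but note how many samples the sampling route needs even for five data points, which is the slow convergence the abstract reports.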

  14. Effects of averaging over motion and the resulting systematic errors in radiation therapy

    International Nuclear Information System (INIS)

    Evans, Philip M; Coolens, Catherine; Nioutsikou, Elena

    2006-01-01

    The potential for systematic errors in radiotherapy of a breathing patient is considered using the statistical model of Bortfeld et al (2002 Phys. Med. Biol. 47 2203-20). It is shown that although averaging over 30 fractions does result in a narrow Gaussian distribution of errors, as predicted by the central limit theorem, the fact that one or a few samples of the breathing patient's motion distribution are used for treatment planning (in contrast to the many treatment fractions that are likely to be delivered) may result in a much larger error with a systematic component. The error distribution may be particularly large if a scan at breath-hold is used for planning. (note)
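
    The distinction can be reproduced with a toy simulation: the fraction-averaged treatment position is narrowly distributed, as the central limit theorem predicts, while a plan built from a single snapshot of the motion distribution carries a much larger, systematic offset. The 5 mm motion spread is an assumed number.

```python
import numpy as np

rng = np.random.default_rng(11)
sigma_breath = 5.0        # mm, spread of the breathing motion distribution
n_fractions = 30
n_patients = 2000

# Mean treated position over 30 fractions (CLT: sd shrinks to sigma/sqrt(30))
treated_mean = rng.normal(0, sigma_breath, (n_patients, n_fractions)).mean(axis=1)

# Plan built from a single snapshot drawn from the same motion distribution
plan_snapshot = rng.normal(0, sigma_breath, n_patients)

residual_avg = treated_mean                     # error if the plan used the true mean
residual_snap = plan_snapshot - treated_mean    # error if the plan used one sample
print(residual_avg.std(), residual_snap.std())
```

    The snapshot-based residual retains nearly the full breathing spread, i.e. the single planning scan contributes a systematic error that fraction averaging cannot remove.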

  15. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-measurement device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  16. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition

  17. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    Directory of Open Access Journals (Sweden)

    Ángel J. Jarama

    2017-09-01

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-measurement device. Distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  18. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce both the systematic and the random error of the measurements, we apply pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, and the Butterworth filter produces the lowest random error among them. By using a Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of the random
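
    A sketch of the binomial filter mentioned above (1-D; a 2-D version follows by applying the separable kernel along rows and then columns): it has unity gain at DC, a zero at the Nyquist frequency, and reduces white-noise variance by the sum of its squared coefficients.

```python
import numpy as np

def binomial_kernel(order):
    """1-D binomial low-pass kernel, e.g. order 2 -> [0.25, 0.5, 0.25]."""
    k = np.array([1.0])
    for _ in range(order):
        k = np.convolve(k, [0.5, 0.5])
    return k

k = binomial_kernel(2)
gain_dc = k.sum()                                            # response at DC
gain_nyquist = abs(np.sum(k * np.array([1.0, -1.0, 1.0])))   # response at Nyquist

rng = np.random.default_rng(4)
noise = rng.normal(0.0, 1.0, 100_000)         # additive white image noise (1-D toy)
filtered = np.convolve(noise, k, mode="same")
print(k, gain_dc, gain_nyquist, filtered.std())
```

    Suppressing the band near Nyquist is exactly where the interpolation-induced systematic error lives, which is why even this three-tap kernel already helps, at low implementation cost.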

  19. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and the attitude registration. As standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and just an attitude recording of 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, as also detailed satellite orientation information. There is a tendency toward systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors in object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS
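
    The tilt removal discussed above amounts, in its simplest form, to a least-squares plane fit to the DHM-minus-reference height differences at GCPs; a sketch on synthetic data, where the tilt, offset, and noise values are invented.

```python
import numpy as np

def fit_tilt(x, y, dh):
    """Least-squares plane dh ~ a*x + b*y + c fitted to GCP height residuals."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeff, *_ = np.linalg.lstsq(A, dh, rcond=None)
    return coeff

rng = np.random.default_rng(5)
x = rng.uniform(0, 10_000, 40)                # GCP easting (m)
y = rng.uniform(0, 10_000, 40)                # GCP northing (m)
dh = 0.002 * x - 0.001 * y + 3.0 + rng.normal(0, 0.3, 40)   # tilted, shifted DHM

a, b, c = fit_tilt(x, y, dh)
corrected = dh - (a * x + b * y + c)
print(dh.std(), corrected.std())
```

    After subtracting the fitted plane, the residual spread shrinks to the GCP noise level; as the abstract notes, this only works when the GCP accuracy and distribution allow a reliable plane estimate.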

  20. Errors as a Means of Reducing Impulsive Food Choice.

    Science.gov (United States)

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2016-06-05

    Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.

  1. Human-simulation-based learning to prevent medication error: A systematic review.

    Science.gov (United States)

    Sarfati, Laura; Ranchon, Florence; Vantard, Nicolas; Schwiertz, Vérane; Larbre, Virginie; Parat, Stéphanie; Faudel, Amélie; Rioufol, Catherine

    2018-01-31

    In the past 2 decades, there has been an increasing interest in simulation-based learning programs to prevent medication error (ME). To improve knowledge, skills, and attitudes in prescribers, nurses, and pharmaceutical staff, these methods enable training without directly involving patients. However, best practices for simulation for healthcare providers are as yet undefined. By analysing the current state of experience in the field, the present review aims to assess whether human simulation in healthcare helps to reduce ME. A systematic review was conducted on Medline from 2000 to June 2015, associating the terms "Patient Simulation," "Medication Errors," and "Simulation Healthcare." Reports of technology-based simulation were excluded, to focus exclusively on human simulation in nontechnical skills learning. Twenty-one studies assessing simulation-based learning programs were selected, focusing on pharmacy, medicine or nursing students, or concerning programs aimed at reducing administration or preparation errors, managing crises, or learning communication skills for healthcare professionals. The studies varied in design, methodology, and assessment criteria. Few demonstrated that simulation was more effective than didactic learning in reducing ME. This review highlights a lack of long-term assessment and real-life extrapolation, with limited scenarios and participant samples. These various experiences, however, help in identifying the key elements required for an effective human simulation-based learning program for ME prevention: i.e., scenario design, debriefing, and perception assessment. The performance of these programs depends on their ability to reflect reality and on professional guidance. Properly regulated simulation is a good way to train staff in events that happen only exceptionally, as well as in standard daily activities. By integrating human factors, simulation seems to be effective in preventing iatrogenic risk related to ME, if the program is

  2. ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction

    International Nuclear Information System (INIS)

    Wu Yan; Shannon, Mark A.

    2006-01-01

    The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in the scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that the ac driving amplitude dependence in the SKPM measurement can come from a systematic error that is common to all tip-sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM, and CPD measurement results of two systems, platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon, are discussed.
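
    The proposed correction reduces, in the simplest reading, to a straight-line fit of the measured CPD against the reciprocal ac driving amplitude, with the intercept taken as the true CPD. A sketch with synthetic data; the CPD value, slope, and noise level are invented.

```python
import numpy as np

v_ac = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 5.0])   # ac driving amplitudes (V)
cpd_true, slope = 0.30, 0.12                       # hypothetical values (V, V^2)
rng = np.random.default_rng(2)
cpd_meas = cpd_true + slope / v_ac + rng.normal(0, 0.002, v_ac.size)

# Linear regression of measured CPD against 1/V_ac; the intercept estimates the
# true CPD, i.e. the reading extrapolated to infinite driving amplitude
coef = np.polyfit(1.0 / v_ac, cpd_meas, 1)        # returns [slope, intercept]
print(coef[1])
```

    The fitted intercept recovers the assumed true CPD to within the measurement noise, while the fitted slope quantifies the amplitude-dependent systematic error itself.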

  3. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S. [et al.

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
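
    The origin of a chromatic error can be sketched with toy synthetic photometry: when the system throughput tilts across the bandpass, stars of different colors shift by different amounts, so no color-independent zeropoint can absorb the change. The spectra and the 20% tilt here are invented, not DES values.

```python
import numpy as np

lam = np.linspace(400.0, 500.0, 1001)     # bandpass wavelengths (nm), uniform grid

def synth_mag(flux, throughput):
    """Toy synthetic magnitude (constant offsets cancel in differences)."""
    return -2.5 * np.log10(np.sum(flux * throughput))

blue = (lam / 450.0) ** -2                # toy blue stellar spectrum
red = (lam / 450.0) ** 2                  # toy red stellar spectrum
flat = np.ones_like(lam)                  # nominal system throughput
tilted = 1.0 + 0.2 * (lam - 450.0) / 50.0 # throughput tilted by 20% across the band

# Magnitude shifts induced by the tilt differ between blue and red stars:
# this residual color term is the systematic chromatic error
d_blue = synth_mag(blue, tilted) - synth_mag(blue, flat)
d_red = synth_mag(red, tilted) - synth_mag(red, flat)
print(d_blue - d_red)
```

    Even this crude model produces a color-dependent offset of a few hundredths of a magnitude for a modest throughput tilt, the same order as the up-to-2% effects the paper reports, which is why the correction requires measured atmospheric and instrumental transmissions.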

  4. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

    by individual C=O bonds. Energy corrections applied to C=O bonds significantly reduce systematic errors and can be extended to adsorbates. A similar study is performed for intermediates in the oxygen evolution and oxygen reduction reactions. An identified systematic error on peroxide bonds is found to also be present in the OOH* adsorbate. However, the systematic error will almost be canceled by inclusion of van der Waals energy. The energy difference between key adsorbates is thus similar to that previously found. Finally, a method is developed for error estimation in computationally inexpensive neural...

  5. Systematic investigation of SLC final focus tolerances to errors

    International Nuclear Information System (INIS)

    Napoly, O.

    1996-10-01

    In this paper we review the tolerances of the SLC final focus system. To calculate these tolerances we used the error analysis routine of the program FFADA, which has been written to aid the design and the analysis of final focus systems for the future linear colliders. This routine, completed by S. Fartoukh, systematically reviews the errors generated by the geometric 6-d Euclidean displacements of each magnet as well as by the field errors (normal and skew) up to the sextupolar order. It calculates their effects on the orbit and the transfer matrix at second order in the errors, thus including cross-talk between errors originating from two different magnets. It also translates these effects into tolerances derived from spot size growth and luminosity loss. We have run the routine for the following set of beam IP parameters: σ*x = 2.1 μm; σ*x' = 300 μrad; σz = 1 mm; σ*y = 0.55 μm; σ*y' = 200 μrad; σδ = 2 × 10⁻³. The resulting errors and tolerances are displayed in a series of histograms which are reproduced in this paper. (author)

  6. Impact of systematic errors on DVH parameters of different OAR and target volumes in Intracavitary Brachytherapy (ICBT)

    International Nuclear Information System (INIS)

    Mourya, Ankur; Singh, Gaganpreet; Kumar, Vivek; Oinam, Arun S.

    2016-01-01

    The aim of this study is to analyze the impact of systematic errors on DVH parameters of different OARs and target volumes in intracavitary brachytherapy (ICBT). To quantify the changes in dose-volume histogram parameters due to systematic errors in applicator reconstruction during brachytherapy planning, known errors in catheter reconstruction have to be introduced into the applicator coordinate system.

  7. Medication errors in the Middle East countries: a systematic review of the literature.

    Science.gov (United States)

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20%) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1% to 90.5% for prescribing and from 9.4% to 80% for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15% to 34.8% of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in Middle Eastern countries were relatively few in number and of poor quality

  8. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    Science.gov (United States)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by the combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once an error is understood, it can be corrected for in the retrieval algorithms to create a set of data which is closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% worldwide of the true value. At this stage, the accuracy is within 1%, but by correcting small errors in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.
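    The kind of correction described, fitting the systematic part of the satellite-minus-TCCON residual against a driver variable and removing it from the retrieval, can be sketched with synthetic data. The linear airmass-dependent bias model and all numbers below are hypothetical illustrations, not the actual retrieval algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
airmass = rng.uniform(1.0, 3.0, 200)
true_xco2 = 395.0 + rng.normal(0.0, 0.5, 200)   # TCCON-like "truth", ppm (synthetic)
bias = 1.2 * (airmass - 2.0)                    # hypothetical airmass-dependent bias, ppm
retrieved = true_xco2 + bias + rng.normal(0.0, 0.3, 200)  # satellite-like retrieval

# Fit the systematic part of the residual against airmass, then remove it
coef = np.polyfit(airmass, retrieved - true_xco2, deg=1)
corrected = retrieved - np.polyval(coef, airmass)

rms_before = np.sqrt(np.mean((retrieved - true_xco2) ** 2))
rms_after = np.sqrt(np.mean((corrected - true_xco2) ** 2))  # only random noise remains
```

    After the fitted bias is subtracted, the residual scatter drops to roughly the random-noise floor, mirroring how scaling out an understood systematic moves the data closer to TCCON.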

  9. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    Full Text Available It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation of the atomic masses. The reference data for Periods 8 and 9 manifest a systematic error in the computer software applied to such calculations (this systematic error increases with the number of the elements in the Table).

  11. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevin.coakley@nist.go [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S. [National Institute of Standards and Technology, Gaithersburg, MD (United States); Yue, A.T. [University of Tennessee, Knoxville, TN (United States); Laptev, A.B. [Tulane University, New Orleans, LA (United States)

    2009-12-11

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.
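    A toy version of such a Monte Carlo study can be sketched from Bragg's law, λ = 2d sin θ. As a purely illustrative assumption (not the NIST geometry or analysis), suppose a small out-of-plane tilt δ of the rocking axis biases the inferred wavelength as λ(δ) = λ0 cos δ, i.e. a fractional bias of about −δ²/2; drawing δ from an assumed alignment uncertainty then quantifies the systematic shift and its spread:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3.135e-10                       # approximate Si(111) plane spacing, m
theta_b = np.deg2rad(20.0)          # nominal Bragg angle (illustrative)
lam0 = 2.0 * d * np.sin(theta_b)    # nominal wavelength from Bragg's law

# Assumed error model: tilt delta of the rocking axis biases the inferred
# wavelength as lambda(delta) = lambda0 * cos(delta) ~ lambda0 * (1 - delta^2/2)
sigma_delta = np.deg2rad(0.5)                 # assumed 1-sigma alignment uncertainty
delta = rng.normal(0.0, sigma_delta, 100_000)
lam = lam0 * np.cos(delta)

frac_bias = lam.mean() / lam0 - 1.0   # mean systematic shift (~ -sigma_delta^2 / 2)
frac_spread = lam.std() / lam0        # spread contributed by the geometric error
```

    Because the bias is second order in the misalignment, sub-degree alignment already keeps this particular contribution far below the 0.1% goal; the published analysis treats the real two-axis geometry rather than this one-parameter model.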

  12. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors

    International Nuclear Information System (INIS)

    Martinelli, T.; Panini, G.C.; Amoroso, A.

    1989-11-01

    Information about systematic errors is not given in EXFOR, the database of experimental nuclear measurements: their assessment is left to the judgement of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, provides a valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in experiments. The features of the project which led to the implementation of the system are presented. (author)

  13. Automation of Commanding at NASA: Reducing Human Error in Space Flight

    Science.gov (United States)

    Dorn, Sarah J.

    2010-01-01

    Automation has been implemented in many industries to improve efficiency and reduce human error. Reducing or eliminating human interaction in tasks has been proven to increase productivity in manufacturing and to lessen the risk of mistakes by humans in the airline industry. Human space flight requires flight controllers to monitor multiple systems and react quickly when failures occur, so NASA is interested in implementing techniques that can assist in these tasks. Using automation to handle some of these responsibilities could reduce the number of errors the flight controllers encounter due to standard human error characteristics. This paper investigates the possibility of reducing human error in the critical area of manned space flight at NASA.

  14. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  15. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

    To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system. The impact of electronic prescribing on reducing errors has not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of the electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving overall error rates of 22·6% and 8·2% with paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving overall error rates of 16·9% and 4·4% with paper-based and electronic prescribing, respectively. Electronic prescribing has a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistani public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing on the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.

  16. Twice cutting method reduces tibial cutting error in unicompartmental knee arthroplasty.

    Science.gov (United States)

    Inui, Hiroshi; Taketomi, Shuji; Yamagami, Ryota; Sanada, Takaki; Tanaka, Sakae

    2016-01-01

    Bone cutting error can be one of the causes of malalignment in unicompartmental knee arthroplasty (UKA). The amount of cutting error in total knee arthroplasty has been reported, but none have investigated cutting error in UKA. The purpose of this study was to reveal the amount of cutting error in UKA when an open cutting guide was used, and to clarify whether cutting the tibia horizontally twice using the same cutting guide reduced the cutting errors in UKA. We measured the alignment of the tibial cutting guides, the first-cut cutting surfaces and the second-cut cutting surfaces using the navigation system in 50 UKAs. Cutting error was defined as the angular difference between the cutting guide and the cutting surface. The mean absolute first-cut cutting error was 1.9° (1.1° varus) in the coronal plane and 1.1° (0.6° anterior slope) in the sagittal plane, whereas the mean absolute second-cut cutting error was 1.1° (0.6° varus) in the coronal plane and 1.1° (0.4° anterior slope) in the sagittal plane. Cutting the tibia horizontally twice reduced the cutting errors in the coronal plane significantly. In conclusion, cutting the tibia horizontally twice using the same cutting guide reduced cutting error in the coronal plane. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Stereotype threat can reduce older adults' memory errors.

    Science.gov (United States)

    Barber, Sarah J; Mather, Mara

    2013-01-01

    Stereotype threat often incurs the cost of reducing the amount of information that older adults accurately recall. In the current research, we tested whether stereotype threat can also benefit memory. According to the regulatory focus account of stereotype threat, threat induces a prevention focus in which people become concerned with avoiding errors of commission and are sensitive to the presence or absence of losses within their environment. Because of this, we predicted that stereotype threat might reduce older adults' memory errors. Results were consistent with this prediction. Older adults under stereotype threat had lower intrusion rates during free-recall tests (Experiments 1 and 2). They also reduced their false alarms and adopted more conservative response criteria during a recognition test (Experiment 2). Thus, stereotype threat can decrease older adults' false memories, albeit at the cost of fewer veridical memories, as well.
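    The finding that threatened participants "adopted more conservative response criteria" can be quantified with standard signal detection theory: the criterion c = −(z(H) + z(F))/2 is positive when responses are biased toward "new". The hit and false-alarm rates below are hypothetical, chosen only to illustrate the reported pattern of fewer false alarms at the cost of fewer hits:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)

def sdt(hit_rate, fa_rate):
    """Sensitivity d' and response criterion c for a yes/no recognition test.
    c > 0 indicates a conservative criterion (bias toward responding 'new')."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2.0
    return d_prime, criterion

# Hypothetical rates: the threat group trades hits for far fewer false alarms
d_ctrl, c_ctrl = sdt(0.80, 0.20)   # control group: neutral criterion (c = 0)
d_thr, c_thr = sdt(0.70, 0.10)    # threat group: conservative criterion (c > 0)
```

    With these illustrative numbers the threat group shows a clearly positive criterion, i.e. fewer false memories, while accuracy (d') need not fall, which is the dissociation the regulatory-focus account predicts.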

  18. On the effects of systematic errors in analysis of nuclear scattering data

    International Nuclear Information System (INIS)

    Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.

    1995-01-01

    The effects of systematic errors in elastic scattering differential cross-section data on the assessment of the quality of fits to those data have been studied. Three cases are considered, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from 12C, from 350 MeV 16O-16O scattering and from 288.6 MeV 12C-12C scattering. First, to estimate the probability of any unknown systematic errors, selected sets of data were processed using the method of generalized cross-validation, a method based on the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to the data, upon allowance for angle variation, became overdetermined, and a far simpler S-function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed-energy inverse scattering study to specify effective, local Schroedinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs

  19. Reducing waste and errors: piloting lean principles at Intermountain Healthcare.

    Science.gov (United States)

    Jimmerson, Cindy; Weber, Dorothy; Sobek, Durward K

    2005-05-01

    The Toyota Production System (TPS), based on industrial engineering principles and operational innovations, is used to achieve waste reduction and efficiency while increasing product quality. Several key tools and principles, adapted to health care, have proved effective in improving hospital operations. Value Stream Maps (VSMs), which represent the key people, material, and information flows required to deliver a product or service, distinguish between value-adding and non-value-adding steps. The one-page Problem-Solving A3 Report guides staff through a rigorous and systematic problem-solving process. PILOT PROJECT at INTERMOUNTAIN HEALTHCARE: In a pilot project, participants made many improvements, ranging from simple changes implemented immediately (for example, heart monitor paper not being available when a patient presented with a dysrhythmia) to larger projects involving patient or information flow issues across multiple departments. Most of the improvements required little or no investment and eliminated significant amounts of wasted time for front-line workers. In one unit, the turnaround time for pathologist reports from an anatomical pathology lab was reduced from five days to two. TPS principles and tools are applicable to an endless variety of processes and work settings in health care and can be used to address critical challenges such as medical errors, escalating costs, and staffing shortages.

  20. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    This paper develops a roundness measurement model with multiple systematic errors, taking eccentricity, probe offset, the radius of the probe tip, and tilt error into account for the roundness measurement of cylindrical components. The effects of these systematic errors and of the component radius on roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle, and its effectiveness is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed model for an object with a large radius of around 37 mm. The proposed method improves the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
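    The baseline the paper compares against, the traditional limacon model, can be sketched directly: fit r(θ) = R + a·cos θ + b·sin θ by least squares, remove the fitted eccentricity terms, and report the peak-to-valley of the residual as the roundness error. The synthetic profile below (37 mm radius, 5 μm eccentricity, 2 μm three-lobe form error) is illustrative; the paper's multi-error model additionally handles probe offset, tip radius, and tilt:

```python
import numpy as np

def roundness_limacon(theta, r):
    """Least-squares limacon fit r(theta) = R + a*cos(theta) + b*sin(theta).
    Returns the roundness error (peak-to-valley residual after removing the
    eccentricity terms a, b) and the fitted eccentricity magnitude."""
    A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    (R, a, b), *_ = np.linalg.lstsq(A, r, rcond=None)
    residual = r - A @ np.array([R, a, b])
    return residual.max() - residual.min(), np.hypot(a, b)

theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
# Synthetic profile: 37 mm radius, 5 um eccentricity, 1 um-amplitude 3-lobe error
r = 37e-3 + 5e-6 * np.cos(theta) + 1e-6 * np.cos(3 * theta)
rness, ecc = roundness_limacon(theta, r)   # ~2 um roundness, ~5 um eccentricity
```

    Because cos 3θ is orthogonal to the limacon basis, the fit recovers the eccentricity exactly here; the residual systematic error the paper addresses appears once probe offset and tilt couple into the harmonics for large-radius parts.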

  1. ASSESSMENT OF SYSTEMATIC CHROMATIC ERRORS THAT IMPACT SUB-1% PHOTOMETRIC PRECISION IN LARGE-AREA SKY SURVEYS

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Boada, S.; Mondrik, N.; Nagasawa, D. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, and Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Tucker, D.; Annis, J.; Finley, D. A.; Kent, S.; Lin, H.; Marriner, J.; Wester, W. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Kessler, R.; Scolnic, D. [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Burke, D. L.; Rykoff, E. S. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); James, D. J.; Walker, A. R. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Collaboration: DES Collaboration; and others

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for
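    The mechanism behind a systematic chromatic error can be sketched with synthetic photometry: when the throughput drifts relative to the natural system, the zeropoint shift depends on the colour of the source. Everything below (flat bandpass, a 10% edge-to-edge throughput tilt, power-law spectral slopes) is an illustrative toy model, not DES data:

```python
import numpy as np

def band_mag(wl, flux, throughput):
    """Broadband magnitude (photon-counting convention, arbitrary zeropoint).
    Uniform wavelength grid assumed, so the grid spacing cancels in the ratio."""
    signal = np.sum(flux * throughput * wl)
    norm = np.sum(throughput * wl)
    return -2.5 * np.log10(signal / norm)

wl = np.linspace(400.0, 700.0, 500)            # nm, uniform grid
nominal = np.ones_like(wl)                     # flat "natural system" bandpass
tilted = 1.0 + 0.1 * (wl - 550.0) / 150.0      # throughput drifted red, 10% edge-to-edge

blue_src = (wl / 550.0) ** -2.0                # blue spectral slope
red_src = (wl / 550.0) ** 2.0                  # red spectral slope

# Zeropoint shift induced by the throughput change, per source colour
d_blue = band_mag(wl, blue_src, tilted) - band_mag(wl, blue_src, nominal)
d_red = band_mag(wl, red_src, tilted) - band_mag(wl, red_src, nominal)
chromatic = d_red - d_blue   # the colour-dependent (uncalibratable-by-zeropoint) part
```

    A single frame-by-frame zeropoint can absorb the mean of d_red and d_blue but not their difference; that residual, a few hundredths of a magnitude in this toy setup, is exactly the SCE the paper corrects with auxiliary measurements of the atmospheric and instrumental transmission.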

  2. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  3. Reduced error signalling in medication-naive children with ADHD

    DEFF Research Database (Denmark)

    Plessen, Kerstin J; Allen, Elena A; Eichele, Heike

    2016-01-01

    BACKGROUND: We examined the blood-oxygen level-dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). METHODS: We acquired ... RESULTS: ... adaptation to errors was reduced in children with ADHD. This adaptation was inversely related to activation of the right-lateralized ventral attention network (VAN) on error trials and to task-driven connectivity between the cingulo-opercular system and the VAN. LIMITATIONS: Our study was limited by the modest sample size ...

  4. Constituent quarks and systematic errors in mid-rapidity charged multiplicity dNch/dη distributions

    Science.gov (United States)

    Tannenbaum, M. J.

    2018-01-01

    Centrality definition in A + A collisions at colliders such as RHIC and the LHC suffers from a correlated systematic uncertainty caused by the efficiency of detecting a p + p collision (50 ± 5% for PHENIX at RHIC). In A + A collisions where centrality is measured by the number of nucleon collisions, Ncoll, the number of nucleon participants, Npart, or the number of constituent-quark participants, Nqp, the error in the efficiency of the primary interaction trigger (Beam-Beam Counters) for a p + p collision leads to a correlated systematic uncertainty in Npart, Ncoll or Nqp which reduces binomially as the A + A collisions become more central. If this is not correctly accounted for in extrapolations from A + A to p + p collisions, mistaken conclusions can result. A recent example is presented concerning whether the mid-rapidity charged multiplicity per constituent-quark participant, (dNch/dη)/Nqp, in Au + Au at RHIC is the same as the value in p + p collisions.

  5. Systematic instrumental errors between oxygen saturation analysers in fetal blood during deep hypoxemia.

    Science.gov (United States)

    Porath, M; Sinha, P; Dudenhausen, J W; Luttkus, A K

    2001-05-01

    During a study of artificially produced deep hypoxemia in fetal cord blood, the systematic errors of three different oxygen saturation analysers were evaluated against a reference CO oximeter. The oxygen tensions (PO2) of 83 pre-heparinized fetal blood samples from umbilical veins were reduced by tonometry to 1.3 kPa (10 mm Hg) and 2.7 kPa (20 mm Hg). The oxygen saturation (SO2) was determined (n=1328) on a reference CO oximeter (ABL625, Radiometer Copenhagen) and on three tested instruments (two CO oximeters, Chiron865, Bayer Diagnostics, and ABL700, Radiometer Copenhagen, and a portable blood gas analyser, i-STAT, Abbott). The CO oximeters measure the oxyhemoglobin and reduced hemoglobin fractions by absorption spectrophotometry; the i-STAT system calculates the oxygen saturation from the measured pH, PO2, and PCO2. The measurements were performed in duplicate. Statistical evaluation focused on the differences between duplicate measurements and on systematic instrumental errors in oxygen saturation analysis compared to the reference CO oximeter. After tonometry, the median saturation dropped to 32.9% at PO2 = 2.7 kPa (20 mm Hg), defined as saturation range 1, and to 10% SO2 at PO2 = 1.3 kPa (10 mm Hg), defined as range 2. With decreasing SO2, all devices showed an increased difference between duplicate measurements. ABL625 and ABL700 showed the closest agreement between instruments (0.25% SO2 bias at saturation range 1 and -0.33% SO2 bias at saturation range 2). Chiron865 indicated higher saturation values than ABL625 (3.07% SO2 bias at saturation range 1 and 2.28% SO2 bias at saturation range 2). Calculated saturation values (i-STAT) were more than 30% lower than the values measured by ABL625. The disagreement among the CO oximeters was small but increased under deep hypoxemia, while the calculated saturations were unacceptably low.
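    Calculated-saturation devices such as the i-STAT rely on an empirical oxyhemoglobin dissociation curve rather than spectrophotometry. The well-known Severinghaus (1979) fit, shown here without the pH/PCO2 shift that a real device applies, reproduces the abstract's operating points (about 32% SO2 at 20 mm Hg and about 10% at 10 mm Hg) and illustrates why small PO2 errors translate into large saturation errors on the steep part of the curve:

```python
def severinghaus_so2(po2_mmhg):
    """Approximate haemoglobin O2 saturation (%) from PO2 (mm Hg) under
    standard conditions, using the Severinghaus (1979) empirical fit.
    Real calculated-saturation devices shift this curve by measured pH/PCO2."""
    x = po2_mmhg
    return 100.0 / (23400.0 / (x**3 + 150.0 * x) + 1.0)

so2_range1 = severinghaus_so2(20.0)   # ~32% at 20 mm Hg (cf. 32.9% in the study)
so2_range2 = severinghaus_so2(10.0)   # ~10% at 10 mm Hg
```

    Any systematic PO2 or pH offset, or a dissociation curve that does not match fetal hemoglobin, propagates through this relation, which is one plausible reason the calculated values diverged so badly from the CO-oximeter measurements in deep hypoxemia.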

  6. Automated drug dispensing system reduces medication errors in an intensive care setting.

    Science.gov (United States)

    Chapuis, Claire; Roustit, Matthieu; Bal, Gaëlle; Schwebel, Carole; Pansu, Pascal; David-Tchouda, Sandra; Foroni, Luc; Calop, Jean; Timsit, Jean-François; Allenet, Benoît; Bosson, Jean-Luc; Bedouch, Pierrick

    2010-12-01

    We aimed to assess the impact of an automated dispensing system on the incidence of medication errors related to picking, preparation, and administration of drugs in a medical intensive care unit. We also evaluated the clinical significance of such errors and user satisfaction. Preintervention and postintervention study involving a control and an intervention medical intensive care unit. Two medical intensive care units in the same department of a 2,000-bed university hospital. Adult medical intensive care patients. After a 2-month observation period, we implemented an automated dispensing system in one of the units (study unit) chosen randomly, with the other unit being the control. The overall error rate was expressed as a percentage of total opportunities for error. The severity of errors was classified according to National Coordinating Council for Medication Error Reporting and Prevention categories by an expert committee. User satisfaction was assessed through self-administered questionnaires completed by nurses. A total of 1,476 medications for 115 patients were observed. After automated dispensing system implementation, we observed a significantly lower percentage of total opportunities for error in the study unit than in the control unit (13.5% and 18.6%, respectively), whereas before implementation the corresponding rates had been 20.4% and 13.5%. Analysis by type of error showed a significant impact of the automated dispensing system in reducing preparation errors. Most errors caused no harm (National Coordinating Council for Medication Error Reporting and Prevention category C). The automated dispensing system did not reduce errors causing harm. Finally, the mean rating for working conditions improved from 1.0±0.8 to 2.5±0.8 on the four-point Likert scale. The implementation of an automated dispensing system reduced overall medication errors related to picking, preparation, and administration of drugs in the intensive care unit. Furthermore, most nurses favored the new drug dispensation organization.

  7. Reducing patient identification errors related to glucose point-of-care testing

    Directory of Open Access Journals (Sweden)

    Gaurav Alreja

    2011-01-01

    Full Text Available Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server, which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds, and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
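    The pre-test check described can be sketched as follows; the data model and function names are hypothetical illustrations, not the vendor's interface. The meter validates the scanned nine-digit account number against current ADT registrations and blocks testing until a mismatch is resolved:

```python
# Hypothetical in-memory stand-in for the ADT registration feed:
# account number -> current patient location
adt_registrations = {"123456789": "WARD-3", "987654321": "ICU-1"}

def verify_patient_id(account_number: str) -> bool:
    """True only for a well-formed nine-digit account number present in the
    current ADT registrations."""
    return (len(account_number) == 9
            and account_number.isdigit()
            and account_number in adt_registrations)

def run_glucose_test(account_number: str) -> str:
    """Check the ID *before* testing, so mismatches are resolved in real time
    instead of orphaning a result after the fact."""
    if not verify_patient_id(account_number):
        return "BLOCKED: resolve patient ID before testing"
    return f"result filed to chart for {account_number}"
```

    The essential design point from the study is the ordering: validating against the ADT feed before the strip is used converts a silent post-hoc filing error into an immediately correctable bedside prompt.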

  8. Investigating Systematic Errors of the Interstellar Flow Longitude Derived from the Pickup Ion Cutoff

    Science.gov (United States)

    Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Complementary to the direct neutral-particle measurements performed by, e.g., IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool for investigating the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Due to this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUI's injection speed and thus the former neutral-particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can shift the result by up to 5°. In turn, this means that every value of the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.

  9. Systematic literature review of hospital medication administration errors in children

    Directory of Open Access Journals (Sweden)

    Ameer A

    2015-11-01

Ahmed Ameer,1 Soraya Dhillon,1 Mark J Peters,2 Maisoon Ghaleb1 1Department of Pharmacy, School of Life and Medical Sciences, University of Hertfordshire, Hatfield, UK; 2Paediatric Intensive Care Unit, Great Ormond Street Hospital, London, UK. Objective: Medication administration is the last step in the medication process. It can act as a safety net to prevent unintended harm to patients, provided errors are detected before the dose is given. However, medication administration errors (MAEs) during this process have been documented and are thought to be preventable. In pediatric medicine, doses are usually administered based on the child's weight or body surface area, which increases the risk of drug miscalculations and therefore MAEs. The aim of this review is to report MAEs occurring in pediatric inpatients. Methods: Twelve bibliographic databases were searched for studies published between January 2000 and February 2015 using "medication administration errors", "hospital", and "children" related terminologies. Handsearching of relevant publications was also carried out. A second reviewer screened articles for eligibility and quality in accordance with the inclusion/exclusion criteria. Key findings: A total of 44 studies were systematically reviewed. MAEs were generally defined as a deviation of the dose given from that prescribed; this included omitted doses and administration at the wrong time. Hospital MAEs in children accounted for a mean of 50% of all reported medication error reports (n=12,588). MAEs were also identified in a mean of 29% of doses observed (n=8,894). The most prevalent types of MAE related to preparation, infusion rate, dose, and time. This review has identified five types of interventions to reduce hospital MAEs in children: barcode medicine administration, electronic prescribing, education, use of smart pumps, and standard concentrations. Conclusion: This review has identified a wide variation in the prevalence of hospital MAEs in children. This is attributed to

  10. Reduced error signalling in medication-naive children with ADHD

    DEFF Research Database (Denmark)

    Plessen, Kerstin J; Allen, Elena A; Eichele, Heike

    2016-01-01

BACKGROUND: We examined the blood-oxygen level-dependent (BOLD) activation in brain regions that signal errors and their association with intraindividual behavioural variability and adaptation to errors in children with attention-deficit/hyperactivity disorder (ADHD). METHODS: We acquired … functional MRI data during a Flanker task in medication-naive children with ADHD and healthy controls aged 8-12 years and analyzed the data using independent component analysis. For components corresponding to performance monitoring networks, we compared activations across groups and conditions … and correlated them with reaction times (RT). Additionally, we analyzed post-error adaptations in behaviour and motor component activations. RESULTS: We included 25 children with ADHD and 29 controls in our analysis. Children with ADHD displayed reduced activation to errors in cingulo-opercular regions …

  11. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Science.gov (United States)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  12. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
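The deviation modeled in this Note can be reproduced numerically. The sketch below (band width, extinction slope, path length, and concentrations are all illustrative, not taken from the paper) averages the transmitted intensity over a finite spectral band in which the molar extinction coefficient varies linearly, then compares the resulting absorbance against the monochromatic Beer-Lambert value:

```python
import numpy as np

# Spectral band of the instrument: 10 nm wide, with the molar extinction
# coefficient varying linearly across it (all numbers are illustrative).
w = np.linspace(-5.0, 5.0, 101)            # nm, offset from the band centre
eps0, slope = 1.0e4, 2.0e2                 # L mol^-1 cm^-1, and per-nm slope
eps = eps0 + slope * w
path = 1.0                                 # cm

for conc in (1e-5, 5e-5, 1e-4):            # mol/L
    true_A = eps0 * path * conc            # monochromatic Beer-Lambert value
    transmitted = np.mean(10.0 ** (-eps * path * conc))   # band-averaged intensity
    measured_A = -np.log10(transmitted)
    err = 100.0 * (measured_A - true_A) / true_A
    print(f"c={conc:.0e}  A_true={true_A:.4f}  A_meas={measured_A:.4f}  err={err:+.2f}%")
```

By Jensen's inequality the band-averaged transmittance always exceeds the monochromatic one, so the measured absorbance is systematically low; widening the band, steepening the extinction slope, or raising the concentration grows the error, consistent with the parametric dependence described in the abstract.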

  13. Reduced phase error through optimized control of a superconducting qubit

    International Nuclear Information System (INIS)

    Lucero, Erik; Kelly, Julian; Bialczak, Radoslaw C.; Lenander, Mike; Mariantoni, Matteo; Neeley, Matthew; O'Connell, A. D.; Sank, Daniel; Wang, H.; Weides, Martin; Wenner, James; Cleland, A. N.; Martinis, John M.; Yamamoto, Tsuyoshi

    2010-01-01

Minimizing phase and other errors in experimental quantum gates allows higher fidelity quantum processing. To quantify and correct for phase errors, in particular, we have developed an experimental metrology - amplified phase error (APE) pulses - that amplifies and helps identify phase errors in general multilevel qubit architectures. In order to correct for both phase and amplitude errors specific to virtual transitions and leakage outside of the qubit manifold, we implement 'half derivative', an experimental simplification of derivative reduction by adiabatic gate (DRAG) control theory. The phase errors are lowered by about a factor of five using this method to ∼1.6 deg. per gate, and can be tuned to zero. Leakage outside the qubit manifold, to the qubit |2> state, is also reduced to ∼10⁻⁴ for 20% faster gates.

  14. End-point construction and systematic titration error in linear titration curves-complexation reactions

    NARCIS (Netherlands)

    Coenegracht, P.M.J.; Duisenberg, A.J.M.

The systematic titration error which is introduced by the intersection of tangents to hyperbolic titration curves is discussed. The effects of the apparent (conditional) formation constant, of the concentration of the unknown component, and of the ranges used for the end-point construction are examined.

  15. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes the time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.

  16. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    Science.gov (United States)

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. 
The presented algorithm was able to identify systematic errors as small
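The detection step described above can be sketched compactly: smooth every metabolite trend, compute percent deviations from the fit, and flag timepoints where the median deviation across all metabolites exceeds a threshold. The sketch uses a cubic least-squares fit as a simple stand-in for the paper's nonparametric smoother, and all trends, noise levels, and thresholds are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
n_met, n_t = 20, 15
t = np.linspace(0.0, 1.0, n_t)

# Simulated increasing concentration trends with 1% random noise.
base = np.array([(1.0 + i / 10.0) * (1.0 + t + 0.3 * np.sin(3.0 * t))
                 for i in range(n_met)])
data = base * (1.0 + 0.01 * rng.standard_normal((n_met, n_t)))

# Inject a dilution-like systematic error: every metabolite reads 10% low
# at one timepoint (e.g. a mis-pipetted sample).
bad = 7
data[:, bad] *= 0.90

# Smooth each trend (cubic least squares as a stand-in for the paper's
# nonparametric smoother) and take percent deviations from the fit.
fits = np.array([np.polyval(np.polyfit(t, row, 3), t) for row in data])
pct_dev = 100.0 * (data - fits) / fits

# A timepoint where the *median* deviation across all metabolites is large
# indicates an effect acting on the whole sample, not random noise.
median_dev = np.median(pct_dev, axis=0)
flagged = np.where(np.abs(median_dev) > 3.0)[0]
print("flagged timepoints:", flagged)
```

Random noise averages out in the per-timepoint median, while a dilution effect shifts every metabolite the same way, which is why the corrupted timepoint stands out against the threshold.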

  17. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, have turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond this, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
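The central prescription, combining a confidence interval for random scatter with a worst-case bound for the unknown constant bias by arithmetic (rather than quadrature) addition, can be sketched in a few lines. The readings and the bias bound below are invented for illustration:

```python
import math
from statistics import mean, stdev

# Eight repeated readings of a quantity, plus a worst-case bound F on an
# unknown, constant systematic error -- all numbers are illustrative.
readings = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00, 9.99, 10.02]
F = 0.05

n = len(readings)
xbar = mean(readings)
s = stdev(readings)
t95 = 2.365                         # Student t, n - 1 = 7 dof, 95% two-sided

u_random = t95 * s / math.sqrt(n)   # confidence interval from random scatter
u_total = u_random + F              # arithmetic sum, not quadrature: a constant
                                    # bias does not average away with repetition
print(f"x = {xbar:.4f} +/- {u_total:.4f}  (random part {u_random:.4f}, bias bound {F})")
```

The arithmetic sum reflects the worst case: however many readings are taken, the systematic offset stays put, so it cannot be shrunk by averaging the way the random part can.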

  18. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

… This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail, both empirically and through computational cognitive modeling …

  19. Variation across mitochondrial gene trees provides evidence for systematic error: How much gene tree variation is biological?

    Science.gov (United States)

    Richards, Emilie J; Brown, Jeremy M; Barley, Anthony J; Chong, Rebecca A; Thomson, Robert C

    2018-02-19

    The use of large genomic datasets in phylogenetics has highlighted extensive topological variation across genes. Much of this discordance is assumed to result from biological processes. However, variation among gene trees can also be a consequence of systematic error driven by poor model fit, and the relative importance of biological versus methodological factors in explaining gene tree variation is a major unresolved question. Using mitochondrial genomes to control for biological causes of gene tree variation, we estimate the extent of gene tree discordance driven by systematic error and employ posterior prediction to highlight the role of model fit in producing this discordance. We find that the amount of discordance among mitochondrial gene trees is similar to the amount of discordance found in other studies that assume only biological causes of variation. This similarity suggests that the role of systematic error in generating gene tree variation is underappreciated and critical evaluation of fit between assumed models and the data used for inference is important for the resolution of unresolved phylogenetic questions.

  20. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    Science.gov (United States)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious, and less attention is drawn to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors, thanks to averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
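The mechanism, zero-mean noise turning into a systematic RSP bias at the angular points of the calibration curve, follows from convexity and is easy to demonstrate. The sketch below uses a made-up two-segment HU-to-RSP curve with a kink at HU = 0 and an illustrative noise level; it estimates the noise-dependent effective curve by Monte Carlo averaging:

```python
import numpy as np

# Illustrative two-segment HU-to-RSP calibration curve with a kink at HU = 0
# (slopes are invented for the demo, not taken from the paper).
def rsp(hu):
    hu = np.asarray(hu, dtype=float)
    return np.where(hu < 0.0, 1.0 + 1.0e-3 * hu, 1.0 + 2.0e-3 * hu)

rng = np.random.default_rng(1)
sigma = 30.0                            # HU noise standard deviation (assumed)
noise = rng.normal(0.0, sigma, 200_000)

biases = {}
for hu in (-100.0, -30.0, 0.0, 30.0, 100.0):
    nominal = float(rsp(hu))
    effective = float(np.mean(rsp(hu + noise)))   # noise-dependent effective curve
    biases[hu] = 100.0 * (effective - nominal) / nominal
    print(f"HU={hu:+6.0f}  RSP={nominal:.4f}  effective={effective:.4f}  "
          f"bias={biases[hu]:+.3f}%")
```

Far from the kink the curve is locally linear and the zero-mean noise averages away; at the kink the curve is convex, so the average of the noisy mapping sits above the noiseless value, producing a bias of the order of a percent for this noise level, in line with the abstract's figure.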

  1. Near field communications technology and the potential to reduce medication errors through multidisciplinary application.

    Science.gov (United States)

    O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark

    2016-01-01

Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward. Medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with a paper system to a mean of 0.80 errors per round using NFC (P=0.000). An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment.

  2. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was ES. ER program reduced errors by incrementally raising the task difficulty, while the ES program had an incremental lowering of task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reduced performance errors in FMS training resulted in greater learning than a program that did not restrict errors. Reduced cognitive processing costs (effective dual-task performance) associated with such approach suggest its potential benefits for children with developmental conditions. © 2011 John Wiley & Sons A/S.

  3. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  4. Personnel β-dosimetry method for reducing energy dependence

    International Nuclear Information System (INIS)

    Gesell, T.F.; Jones, D.E.; Gupta, V.P.; Kalbeitzer, F.L.; Cusimano, J.P.

    1979-03-01

Current practices for the measurement of skin dose are reviewed and found to be inadequate. The current INEL dosimeter was examined for systematic and random error. Systematic (i.e., variation with energy) error was found to range over a factor of 10, while the random error was reasonably small for the larger β/γ ratios. Other designs with thicker windows, as is more common, would show even larger systematic errors. Various methods for improving beta dosimetry were reviewed. A new dosimeter design utilizing three chips, each having a window of different thickness, was proposed. According to the calculations, this dosimeter should markedly reduce systematic error but will introduce somewhat more random error. Preliminary measurements were carried out related to angular dependence and charged particle equilibrium. The proposed dosimeter design was tested with betas from a uranium slab. The average of the seven results was in excellent agreement with the known dose rate and the standard deviation of the result was 16%

  5. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.
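The core effect, a forecast model bias propagating through state-estimation data assimilation, can be illustrated with a deliberately minimal scalar Kalman filter; this is a toy stand-in, not the authors' hydrologic model, and all dynamics, noise levels, and the bias magnitude are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 300
a, q, r_obs = 0.95, 0.5, 0.5          # AR(1) "catchment" dynamics (illustrative)

# Synthetic truth and noisy observations.
truth = np.zeros(T)
for k in range(1, T):
    truth[k] = a * truth[k - 1] + rng.normal(0.0, np.sqrt(q))
obs = truth + rng.normal(0.0, np.sqrt(r_obs), T)

def kalman(bias):
    """Scalar Kalman filter whose forecast model carries a constant bias,
    mimicking a model calibrated to pre-change catchment conditions."""
    x, P, sq_errs = 0.0, 1.0, []
    for k in range(1, T):
        x, P = a * x + bias, a * a * P + q            # biased forecast step
        K = P / (P + r_obs)                           # Kalman gain
        x, P = x + K * (obs[k] - x), (1.0 - K) * P    # analysis update
        sq_errs.append((x - truth[k]) ** 2)
    return float(np.sqrt(np.mean(sq_errs)))

print(f"analysis RMSE, unbiased model: {kalman(0.0):.3f}")
print(f"analysis RMSE, biased model:   {kalman(1.0):.3f}")
```

The update step only partially corrects the bias each cycle, so a residual systematic error persists in the analysis; whether that residual matters relative to the baseline (unbiased) prediction uncertainty is exactly the trade-off the paper formalizes.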

  6. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen; Williamson, Jeffrey F.; Schmidt-Ullrich, Rupert K.

    2005-01-01

… Combined random and systematic dose errors with σ = Σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the σ = Σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy

  7. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms during SNP discovery. We used SAMtools to identify polymorphisms in a velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms was filtered to minimize 'bycatch': polymorphisms caused by sequencing or assembly error. An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand-bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.
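The filtering idea, screening for random-error signatures (depth, read support) and for systematic artefacts such as strand bias, can be sketched on toy variant records. The record fields and all thresholds below are invented for illustration; they are not the paper's pipeline:

```python
# Toy variant records: total depth, reads supporting the alternate allele,
# and how those supporting reads split between forward and reverse strands.
variants = [
    {"depth": 42, "alt": 18, "fwd": 9,  "rev": 9,  "type": "A/C"},
    {"depth": 40, "alt": 15, "fwd": 15, "rev": 0,  "type": "A/C"},  # strand bias
    {"depth": 6,  "alt": 3,  "fwd": 2,  "rev": 1,  "type": "G/T"},  # low depth
    {"depth": 55, "alt": 20, "fwd": 11, "rev": 9,  "type": "A/G"},
]

def keep(v, min_depth=10, min_alt=4, max_strand_frac=0.9):
    # Random-error guards: enough coverage and enough supporting reads.
    if v["depth"] < min_depth or v["alt"] < min_alt:
        return False
    # Systematic-error guard: a real variant should be seen on both strands;
    # support confined to one strand suggests a sequencing artefact.
    frac_fwd = v["fwd"] / v["alt"]
    return min(frac_fwd, 1.0 - frac_fwd) >= 1.0 - max_strand_frac

kept = [i for i, v in enumerate(variants) if keep(v)]
print("kept variant indices:", kept)
```

The depth and support rules catch random sequencing errors, but only the strand-balance rule catches the second record, which is exactly the kind of systematic artefact the abstract argues is worth filtering.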

  8. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    International Nuclear Information System (INIS)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-01-01

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
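Why a small systematic MLC shift slips through dose-difference/DTA style criteria can be seen in a deliberately simplified, one-dimensional stand-in for the composite checks above (the real analysis uses planar dose from MAPCHECK and the EPID; the field shape, shift, and tolerances here are invented):

```python
import numpy as np

x = np.arange(0.0, 100.0, 1.0)                  # mm measurement grid
calc = 50.0 + 50.0 * np.tanh((x - 50.0) / 8.0)  # calculated field edge (arb. units)
meas = np.interp(x - 1.0, x, calc)              # 1 mm systematic leaf-bank shift

def passes(i, dose_tol=0.03, dta_mm=3.0):
    """Point passes if any calculated point within dta_mm agrees to dose_tol
    of the maximum dose -- a crude composite of the AD and DTA criteria."""
    near = np.abs(x - x[i]) <= dta_mm
    return bool(np.any(np.abs(calc[near] - meas[i]) <= dose_tol * calc.max()))

rate = 100.0 * np.mean([passes(i) for i in range(len(x))])
print(f"pass rate with a 1 mm systematic shift: {rate:.1f}%")
```

Because the 3 mm distance-to-agreement tolerance comfortably absorbs a 1 mm rigid shift of the whole profile, the pass rate barely moves, which mirrors the paper's conclusion that per-field patient QC cannot substitute for direct, routine QC of the MLC itself.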

  10. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, general requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review, to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher risk of unfair decision-making, they should receive more attention than false-negative errors.

  11. Probabilistic error bounds for reduced order modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abdo, M.G.; Wang, C.; Abdel-Khalik, H.S., E-mail: abdo@purdue.edu, E-mail: wang1730@purdue.edu, E-mail: abdelkhalik@purdue.edu [Purdue Univ., School of Nuclear Engineering, West Lafayette, IN (United States)

    2015-07-01

    Reduced order modeling has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the associated reactor physics models is sufficiently small when compared to the nominal dimensionality of the input and output data streams. By employing a truncation technique with roots in linear algebra matrix decomposition theory, ROM effectively discards all components of the input and output data that have negligible impact on reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components can be upper-bounded with overwhelmingly high probability. Conversely, this implies that the ROM algorithm can self-adapt to determine the level of reduction needed such that the maximum resulting reduction error is below a given tolerance limit set by the user. (author)
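The bound-and-adapt idea can be sketched numerically. The following is an illustrative reconstruction, not the authors' algorithm: it builds a truncated-SVD ROM of a synthetic matrix and upper-bounds the norm of the discarded components using a few random Gaussian probes (a standard randomized-linear-algebra bound), then picks the smallest rank that meets a user tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic response matrix with a rapidly decaying spectrum, standing in
# for reactor-physics data whose intrinsic dimensionality is small.
m, n = 200, 150
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
sv = 10.0 ** -np.arange(n, dtype=float)       # fast singular-value decay
A = U0 @ np.diag(sv) @ V0.T

def rom_basis(A, r):
    """Rank-r basis spanning the kept ROM components (truncated SVD)."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]

def discarded_error_bound(A, Q, n_probes=10, alpha=10.0):
    """Probabilistic upper bound on ||(I - Q Q^T) A||, i.e. the
    contribution of the discarded components, from random Gaussian
    probes; it holds with probability >= 1 - alpha**(-n_probes)."""
    prng = np.random.default_rng(1)
    W = prng.standard_normal((A.shape[1], n_probes))
    R = A @ W - Q @ (Q.T @ (A @ W))           # residual applied to probes
    return alpha * np.sqrt(2.0 / np.pi) * np.linalg.norm(R, axis=0).max()

def adaptive_rank(A, tol):
    """Self-adapting reduction level: the smallest rank whose error
    bound falls below the user-set tolerance."""
    for r in range(1, min(A.shape)):
        if discarded_error_bound(A, rom_basis(A, r)) < tol:
            return r
    return min(A.shape)

Q = rom_basis(A, 8)
true_err = np.linalg.norm(A - Q @ (Q.T @ A), 2)   # exact discarded norm
bound = discarded_error_bound(A, Q)               # probabilistic bound
```

The bound is pessimistic by roughly the factor `alpha`, but it never requires forming the discarded components explicitly, which is the point at large scale.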

  12. Reducing errors in the GRACE gravity solutions using regularization

    Science.gov (United States)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of their degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. 
A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4
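The parameter-choice machinery can be illustrated on a toy problem. The sketch below is illustrative only: a small synthetic ill-posed kernel stands in for the GRACE estimation problem, and a dense SVD replaces Lanczos bidiagonalization, which matters only at GRACE scale. It samples the L-curve for a Tikhonov-regularized solution and picks the corner as the point of maximum curvature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Severely ill-posed 1-D gravity-inversion test kernel (hypothetical).
n = 64
t = np.linspace(0.0, 1.0, n)
d = 0.25
A = d / (d**2 + (t[:, None] - t[None, :])**2) ** 1.5 / n
x_true = np.sin(2 * np.pi * t) + 1.0
b = A @ x_true + 1e-4 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b

def tikhonov(lam):
    """Tikhonov-regularized solution plus its L-curve coordinates."""
    f = s / (s**2 + lam**2)           # filtered inverse singular values
    x = Vt.T @ (f * beta)
    return x, np.linalg.norm(A @ x - b), np.linalg.norm(x)

# Sample the L-curve: log residual norm vs log solution norm over lambda.
lams = np.logspace(-10, 0, 200)
curve = np.array([tikhonov(l)[1:] for l in lams])
rho, eta = np.log(curve[:, 0]), np.log(curve[:, 1])

# Corner = maximum curvature of the (rho, eta) curve (endpoints excluded).
d1r, d1e = np.gradient(rho), np.gradient(eta)
d2r, d2e = np.gradient(d1r), np.gradient(d1e)
kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
lam_corner = lams[1:-1][np.argmax(kappa[1:-1])]

x_reg = tikhonov(lam_corner)[0]
x_naive = tikhonov(0.0)[0]            # unregularized: noise blows up
err_reg = np.linalg.norm(x_reg - x_true)
err_naive = np.linalg.norm(x_naive - x_true)
```

The unregularized inverse amplifies the data noise by the reciprocal of the smallest singular values, which is exactly the striping mechanism the abstract describes in spherical-harmonic terms.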

  13. Reducing Approximation Error in the Fourier Flexible Functional Form

    Directory of Open Access Journals (Sweden)

    Tristan D. Skolrud

    2017-12-01

    Full Text Available The Fourier Flexible form provides a global approximation to an unknown data generating process. In terms of limiting function specification error, this form is preferable to functional forms based on second-order Taylor series expansions. The Fourier Flexible form is a truncated Fourier series expansion appended to a second-order expansion in logarithms. By replacing the logarithmic expansion with a Box-Cox transformation, we show that the Fourier Flexible form can reduce approximation error by 25% on average in the tails of the data distribution. The new functional form allows for nested testing of a larger set of commonly implemented functional forms.
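The substitution at the heart of the paper, a Box-Cox transformation in place of the logarithmic expansion, is easy to demonstrate: the transform is continuous in its parameter and recovers the log as the parameter goes to 0. A minimal sketch with synthetic data, using scipy's maximum-likelihood estimate of the parameter:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.lognormal(mean=1.0, sigma=0.5, size=500)  # positive, skewed data

def box_cox(y, lam):
    """Box-Cox transform; the logarithm is its lam -> 0 limit."""
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

# Continuity at lam = 0: a near-zero lam is numerically close to log(y).
near_log = box_cox(y, 1e-6)

# scipy picks lam by maximum likelihood; for lognormal data the estimate
# should land near 0, i.e. close to the plain logarithmic expansion.
_, lam_hat = stats.boxcox(y)
```

Nesting works the same way in the functional form: fixing the transformation parameter at 0 recovers the logarithmic expansion as a testable special case.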

  14. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  15. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
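The paper's test separates device effects on location (systematic bias) from effects on scale (random error). A stripped-down permutation analogue, not the GAMLSS boosting procedure itself and with made-up device data, looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical readings of the same quantity from two devices: device B
# has a systematic bias (+0.8) and a larger random error (sd 1.8 vs 1.0).
a = rng.normal(10.0, 1.0, 200)
b = rng.normal(10.8, 1.8, 200)

def perm_pvalue(x, y, stat, n_perm=2000, seed=1):
    """Two-sided permutation p-value for the statistic stat(x, y)."""
    prng = np.random.default_rng(seed)
    obs = abs(stat(x, y))
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        p = prng.permutation(pooled)
        if abs(stat(p[:len(x)], p[len(x):])) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Location effect (systematic bias): difference in means.
p_loc = perm_pvalue(a, b, lambda x, y: y.mean() - x.mean())

# Scale effect (random error): log variance ratio on group-centred data,
# so the location effect does not leak into the scale test.
p_scale = perm_pvalue(a - a.mean(), b - b.mean(),
                      lambda x, y: np.log(y.var() / x.var()))
```

With both effects present, both p-values should be small; the GAMLSS formulation generalizes this to covariate-dependent location and scale submodels.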

  16. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    Science.gov (United States)

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

    Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except for ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability, and thus should not be used in future studies.
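The reliability indices reported here, relative (ICC) and absolute (within-subject CV), can be computed from a simple subjects-by-trials table. A sketch with hypothetical test-retest data, using ICC(2,1) in the Shrout-Fleiss sense; the parameter values are invented, not taken from the review:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test-retest data: n participants x k trials of one TMG
# parameter (e.g. Dm in mm); each trial adds small measurement noise.
n, k = 30, 2
true = rng.normal(6.0, 1.5, n)                  # between-subject spread
Y = true[:, None] + rng.normal(0, 0.3, (n, k))  # within-subject error

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measurement, from the ANOVA mean squares."""
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_c = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # trials
    ss_e = ((Y - Y.mean(axis=1, keepdims=True)
               - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def cv_percent(Y):
    """Within-subject coefficient of variation, in percent."""
    sd_w = np.sqrt(((Y - Y.mean(axis=1, keepdims=True)) ** 2).sum()
                   / (Y.shape[0] * (Y.shape[1] - 1)))
    return 100.0 * sd_w / Y.mean()

icc = icc_2_1(Y)
cv = cv_percent(Y)
```

With a large between-subject spread and small trial-to-trial noise, the ICC sits near 1 and the CV stays in the single digits, the pattern the review reports for Dm.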

  17. Evaluation of positioning errors of the patient using cone beam CT megavoltage; Evaluacion de errores de posicionamiento del paciente mediante Cone Beam CT de megavoltaje

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-07-01

    Image-guided radiation therapy allows the position of the patient in the treatment unit to be assessed and corrected, thus reducing the uncertainties due to patient positioning. This work assesses the systematic and random errors derived from the corrections made to a series of patients with different diseases through an off-line megavoltage cone beam CT (CBCT) protocol. (Author)

  18. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    International Nuclear Information System (INIS)

    Parker, S

    2015-01-01

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. 
Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors.
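The individuals-chart arithmetic behind this comparison is compact. A sketch with hypothetical constancy readings; the ±2% specification window and the +1.5% shift are illustrative values, not quoted from TG-142 or the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily energy-constancy readings (ratio to commissioning
# baseline): 30 in-control days, then a +1.5% systematic shift.
baseline = rng.normal(1.000, 0.004, 30)
shifted = rng.normal(1.015, 0.004, 20)
readings = np.concatenate([baseline, shifted])

# Individuals (X) chart: estimate sigma from the average moving range of
# the in-control period (divide by the d2 = 1.128 constant for n = 2).
center = baseline.mean()
sigma = np.abs(np.diff(baseline)).mean() / 1.128
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Specification limits are typically wider than the control limits
# (an illustrative +/- 2% window is used here).
spec_hi, spec_lo = 1.02, 0.98

out_of_control = np.flatnonzero((readings > ucl) | (readings < lcl))
out_of_spec = np.flatnonzero((readings > spec_hi) | (readings < spec_lo))
```

Because the calculated control limits sit inside the specification window, a shift of this size trips the control chart while rarely, if ever, exceeding specification, which is the abstract's central observation.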

  19. Current pulse: can a production system reduce medical errors in health care?

    Science.gov (United States)

    Printezis, Antonios; Gopalakrishnan, Mohan

    2007-01-01

    One of the reasons for rising health care costs is medical errors, a majority of which result from faulty systems and processes. Health care in the past has used process-based initiatives such as Total Quality Management, Continuous Quality Improvement, and Six Sigma to reduce errors. These initiatives to redesign health care, reduce errors, and improve overall efficiency and customer satisfaction have had moderate success. Current trend is to apply the successful Toyota Production System (TPS) to health care since its organizing principles have led to tremendous improvement in productivity and quality for Toyota and other businesses that have adapted them. This article presents insights on the effectiveness of TPS principles in health care and the challenges that lie ahead in successfully integrating this approach with other quality initiatives.

  20. Design of a real-time spectroscopic rotating compensator ellipsometer without systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Broch, Laurent, E-mail: laurent.broch@univ-lorraine.fr [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Stein, Nicolas [Institut Jean Lamour, Universite de Lorraine, UMR 7198 CNRS, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Zimmer, Alexandre [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS, Universite de Bourgogne, 9 avenue Alain Savary BP 47870, F-21078 Dijon Cedex (France); Battie, Yann; Naciri, Aotmane En [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France)

    2014-11-28

    We describe a spectroscopic ellipsometer in the visible domain (400–800 nm) based on rotating compensator technology using two detectors. The classical analyzer is replaced by a fixed Rochon birefringent beamsplitter which splits the incident light wave into two perpendicularly polarized waves, one oriented at + 45° and the other at − 45° with respect to the plane of incidence. Both emergent optical signals are analyzed by two identical CCD detectors which are synchronized by an optical encoder fixed on the shaft of the stepper motor of the compensator. The final spectrum is the result of the two averaged Ψ and Δ spectra acquired by both detectors. We show that Ψ and Δ spectra are acquired without systematic errors over the 400–800 nm spectral range. The acquisition time can be adjusted down to 25 ms. The setup was validated by monitoring the first steps of bismuth telluride film electrocrystallization. The results show that growth parameters, such as film thickness and the volume fraction of deposited material, can be extracted with better trueness. - Highlights: • High-speed rotating compensator ellipsometer equipped with 2 detectors. • Ellipsometric angles acquired without systematic errors. • In-situ monitoring of the electrocrystallization of a bismuth telluride thin layer. • High accuracy of fitted physical parameters.

  1. Systematic and random errors in lattice parameter determinations

    International Nuclear Information System (INIS)

    Nascimento, E.M.

    1980-01-01

    A new method is proposed for the evaluation of diffraction data used in precise determination of lattice parameters. The method is based on the separation of random and systematic errors at the level of the diffraction angles, where the random part of the errors is independent of the θ angle. The separation is enabled by the assumption that the systematic part of the errors depends linearly on the θ angle. In that situation, high precision in lattice parameter determination is related more to reducing the random error content than to the presence of unremoved systematic errors. (Author) [pt
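One way to realize the described separation is to model the apparent lattice parameter of each reflection with an angle error linear in θ and regress the systematic part away. The following reconstruction is speculative, since the abstract does not give the exact estimator, and uses synthetic cubic-pattern data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cubic powder pattern: Bragg angles carry a systematic
# error that grows linearly with theta (as the abstract assumes) plus a
# small theta-independent random error.
a_true = 4.0500                                # lattice parameter, angstrom
lam = 1.5406                                   # Cu K-alpha wavelength
h2 = np.array([3, 4, 8, 11, 12, 16, 19, 20])   # h^2+k^2+l^2 of the peaks
theta = np.arcsin(lam * np.sqrt(h2) / (2 * a_true))  # true Bragg angles
theta_obs = theta + 1e-3 * theta + rng.normal(0, 1e-5, theta.size)

# Apparent lattice parameter of each reflection via Bragg's law.
a_obs = lam * np.sqrt(h2) / (2 * np.sin(theta_obs))

# First-order propagation: da/a = -cot(theta) * dtheta, with
# dtheta = c * theta, gives  a_obs ~ a0 - (a0 * c) * theta * cot(theta):
# a linear model whose intercept a0 is the systematic-error-free estimate.
X = np.column_stack([np.ones_like(theta), -theta / np.tan(theta)])
(a0, _), *_ = np.linalg.lstsq(X, a_obs, rcond=None)

a_naive = a_obs.mean()    # plain averaging keeps the systematic bias
```

The fitted intercept removes the θ-linear systematic part, while the plain average of the per-reflection values retains it.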

  2. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
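The asymmetry between the two decompositions can be checked numerically at a single evaluation point: squared error splits additively into bias² + variance + noise, while absolute error follows a folded-normal mean in which the components interact through exponential and erf factors. A Monte Carlo sketch, with all numbers illustrative:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# One model-evaluation point: predictions vary across training sets
# (variance), are off on average (bias), and are compared against
# unstable observations (noise).
truth, bias, model_sd, noise_sd = 2.0, 0.5, 0.3, 0.2
n = 200_000
pred = truth + bias + rng.normal(0, model_sd, n)
obs = truth + rng.normal(0, noise_sd, n)

sq_err = np.mean((pred - obs) ** 2)
abs_err = np.mean(np.abs(pred - obs))

# SQ error decomposes additively: every component inflates it.
sq_decomp = bias**2 + model_sd**2 + noise_sd**2

# ABS error is the mean of a folded normal: bias, variance, and noise
# interact, so the components are not simply additive.
s = math.hypot(model_sd, noise_sd)
abs_closed = (s * math.sqrt(2 / math.pi) * math.exp(-bias**2 / (2 * s**2))
              + bias * math.erf(bias / (s * math.sqrt(2))))
```

Both Monte Carlo estimates match their closed forms, but only the SQ form is a plain sum of the three components, which is the structural difference the study builds on.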

  3. Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Patel, Sandeep K.

    1998-01-01

    Imaging of the Sunyaev-Zeldovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii, and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0 analyzing the clusters using either their large or mean angular core radius are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.

  4. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    Science.gov (United States)

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge

  5. Approaches to reducing photon dose calculation errors near metal implants

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F., E-mail: sfkry@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Liu, Xinming [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States); Stingo, Francesco C. [Department of Biostatistics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States)

    2016-09-15

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  6. Approaches to reducing photon dose calculation errors near metal implants

    International Nuclear Information System (INIS)

    Huang, Jessie Y.; Followill, David S.; Howell, Rebecca M.; Mirkovic, Dragan; Kry, Stephen F.; Liu, Xinming; Stingo, Francesco C.

    2016-01-01

    Purpose: Dose calculation errors near metal implants are caused by limitations of the dose calculation algorithm in modeling tissue/metal interface effects as well as density assignment errors caused by imaging artifacts. The purpose of this study was to investigate two strategies for reducing dose calculation errors near metal implants: implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) dose calculation method and use of metal artifact reduction methods for computed tomography (CT) imaging. Methods: Both error reduction strategies were investigated using a simple geometric slab phantom with a rectangular metal insert (composed of titanium or Cerrobend), as well as two anthropomorphic phantoms (one with spinal hardware and one with dental fillings), designed to mimic relevant clinical scenarios. To assess the dosimetric impact of metal kernels, the authors implemented titanium and silver kernels in a commercial collapsed cone C/S algorithm. To assess the impact of CT metal artifact reduction methods, the authors performed dose calculations using baseline imaging techniques (uncorrected 120 kVp imaging) and three commercial metal artifact reduction methods: Philips Healthcare’s O-MAR, GE Healthcare’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI with metal artifact reduction software (MARS) applied. For the simple geometric phantom, radiochromic film was used to measure dose upstream and downstream of metal inserts. For the anthropomorphic phantoms, ion chambers and radiochromic film were used to quantify the benefit of the error reduction strategies. Results: Metal kernels did not universally improve accuracy but rather resulted in better accuracy upstream of metal implants and decreased accuracy directly downstream. For the clinical cases (spinal hardware and dental fillings), metal kernels had very little impact on the dose calculation accuracy (<1.0%). Of the commercial CT artifact

  7. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on the average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, quadratic in the sources or, equivalently, in the rms β-beating. However, random errors do not have a systematic effect on the tune.

  8. Carers' Medication Administration Errors in the Domiciliary Setting: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Anam Parand

    Full Text Available Medications are mostly taken in patients' own homes, increasingly administered by carers, yet studies of medication safety have been largely conducted in the hospital setting. We aimed to review studies of how carers cause and/or prevent medication administration errors (MAEs) within the patient's home; to identify types, prevalence and causes of these MAEs and any interventions to prevent them. A narrative systematic review of literature published between 1 Jan 1946 and 23 Sep 2013 was carried out across the databases EMBASE, MEDLINE, PSYCHINFO, COCHRANE and CINAHL. Empirical studies were included where carers were responsible for preventing/causing MAEs in the home, and standardised tools were used for data extraction and quality assessment. Thirty-six papers met the criteria for narrative review, 33 of which included parents caring for children, two predominantly comprised adult children and spouses caring for older parents/partners, and one focused on paid carers mostly looking after older adults. The carer administration error rate ranged from 1.9 to 33% of medications administered and from 12 to 92.7% of carers administering medication. These included dosage errors, omitted administration, wrong medication and wrong time or route of administration. Contributory factors included individual carer factors (e.g. carer age), environmental factors (e.g. storage), medication factors (e.g. number of medicines), prescription communication factors (e.g. comprehensibility of instructions), psychosocial factors (e.g. carer-to-carer communication), and care-recipient factors (e.g. recipient age). The few interventions effective in preventing MAEs involved carer training and tailored equipment. This review shows that home medication administration errors made by carers are a potentially serious patient safety issue. Carers made similar errors to those made by professionals in other contexts and a wide variety of contributory factors were identified. The home care

  9. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States); Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada); Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States); Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States); Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating the accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. 
Most of the errors were
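The gamma comparison at the center of these case studies can be sketched in one dimension. The toy below is a simplified gamma search over a single profile, not any vendor's implementation, and the profile shapes and shift are invented; it shows how a 2.5 mm systematic positional error sails through a global 3%/3 mm test while a local 2%/2 mm test flags it:

```python
import numpy as np

def gamma_1d(ref, x_ref, ev, x_ev, dose_ta, dist_ta, local=False):
    """Gamma index of an evaluated 1-D profile against a reference.
    dose_ta is fractional (0.03 = 3%); dist_ta is the DTA tolerance in mm."""
    norm = ref if local else np.full_like(ref, ref.max())
    g = np.empty_like(ref)
    for i in range(ref.size):
        dd = (ev - ref[i]) / (dose_ta * norm[i])   # dose-difference axis
        dx = (x_ev - x_ref[i]) / dist_ta           # distance axis
        g[i] = np.sqrt(dx**2 + dd**2).min()        # brute-force search
    return g

# Hypothetical field edge with a 2.5 mm systematic positional error.
x = np.arange(0.0, 100.0, 0.5)                  # mm
ref = 100.0 / (1.0 + np.exp((x - 50.0) / 3.0))  # sigmoid penumbra
ev = 100.0 / (1.0 + np.exp((x - 52.5) / 3.0))   # shifted by 2.5 mm

g_conv = gamma_1d(ref, x, ev, x, 0.03, 3.0)             # 3%/3 mm, global
g_tight = gamma_1d(ref, x, ev, x, 0.02, 2.0, local=True)  # 2%/2 mm, local

pass_conv = 100.0 * np.mean(g_conv <= 1.0)
pass_tight = 100.0 * np.mean(g_tight <= 1.0)
```

A pure 2.5 mm translation costs at most 2.5/3 of the DTA budget under 3%/3 mm, so the conventional passing rate stays at 100% even though every point is misdelivered; the tighter, locally normalized criterion fails the penumbra and low-dose tail.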

  10. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Wee, L.; Harper, C.S.

    2010-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC errors on step and shoot IMRT of prostate cancer. Twelve MLC leaf bank perturbations were introduced to six prostate IMRT treatment plans to simulate systematic MLC errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) for the planning target volume (PTV), rectum and bladder. Negative perturbations of the MLC were found to produce greater changes to endpoint dose parameters than positive perturbations (p < 0.05). Negative and positive synchronized MLC perturbations of 1 mm resulted in median changes of -2.32 and 1.78%, respectively, to D95% of the PTV, whereas asynchronized MLC perturbations of the same direction and magnitude resulted in median changes of 1.18 and 0.90%, respectively. Doses to the rectum were generally more sensitive to systematic MLC errors than doses to the bladder. Synchronized MLC perturbations of 1 mm resulted in median changes of endpoint dose parameters to both rectum and bladder of about 1 to 3%. Maximum reductions of -4.44 and -7.29% were recorded for CI and HTA, respectively, due to synchronized MLC perturbations of 1 mm. In summary, MLC errors resulted in measurable dose changes to the PTV and surrounding critical structures in prostate IMRT. (author)

  11. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    Science.gov (United States)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  12. Cognitive emotion regulation enhances aversive prediction error activity while reducing emotional responses.

    Science.gov (United States)

    Mulej Bratec, Satja; Xie, Xiyao; Schmid, Gabriele; Doll, Anselm; Schilbach, Leonhard; Zimmer, Claus; Wohlschläger, Afra; Riedl, Valentin; Sorg, Christian

    2015-12-01

    Cognitive emotion regulation is a powerful way of modulating emotional responses. However, despite the vital role of emotions in learning, it is unknown whether the effect of cognitive emotion regulation also extends to the modulation of learning. Computational models indicate prediction error activity, typically observed in the striatum and ventral tegmental area, as a critical neural mechanism involved in associative learning. We used model-based fMRI during aversive conditioning with and without cognitive emotion regulation to test the hypothesis that emotion regulation would affect prediction error-related neural activity in the striatum and ventral tegmental area, reflecting an emotion regulation-related modulation of learning. Our results show that cognitive emotion regulation reduced emotion-related brain activity, but increased prediction error-related activity in a network involving ventral tegmental area, hippocampus, insula and ventral striatum. While the reduction of response activity was related to behavioral measures of emotion regulation success, the enhancement of prediction error-related neural activity was related to learning performance. Furthermore, functional connectivity between the ventral tegmental area and ventrolateral prefrontal cortex, an area involved in regulation, was specifically increased during emotion regulation and likewise related to learning performance. Our data, therefore, provide first-time evidence that beyond reducing emotional responses, cognitive emotion regulation affects learning by enhancing prediction error-related activity, potentially via tegmental dopaminergic pathways. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    Science.gov (United States)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. 
We compare it to more classical
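    The temporal model described above, basis-function coefficients evolving as a vector autoregressive process, can be sketched with a minimal VAR(1) simulation. The dimensions, transition matrix, and innovation covariance below are invented for illustration and are not the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical VAR(1) for flux basis-function coefficients:
#   alpha_t = A @ alpha_{t-1} + eta_t,  eta_t ~ N(0, Q)
k = 3                                   # number of basis functions (illustrative)
A = 0.8 * np.eye(k)                     # stable transition matrix (eigenvalues < 1)
Q = 0.1 * np.eye(k)                     # innovation covariance
T = 200

alpha = np.zeros((T, k))
for t in range(1, T):
    eta = rng.multivariate_normal(np.zeros(k), Q)
    alpha[t] = A @ alpha[t - 1] + eta

# For this diagonal, stable VAR(1) the stationary variance per component is q/(1-a^2)
print("sample variance (after burn-in):", alpha[100:].var(axis=0))
print("theoretical stationary variance:", 0.1 / (1 - 0.8**2))
```

    In the full hierarchical approach, the coefficients and the error-model parameters would be sampled jointly by MCMC rather than simulated forward as here.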

  14. Experimental Evaluation of a Mixed Controller That Amplifies Spatial Errors and Reduces Timing Errors

    Directory of Open Access Journals (Sweden)

    Laura Marchal-Crespo

    2017-06-01

    Full Text Available Research on motor learning suggests that training with haptic guidance enhances learning of the timing components of motor tasks, whereas error amplification is better for learning the spatial components. We present a novel mixed guidance controller that combines haptic guidance and error amplification to simultaneously promote learning of the timing and spatial components of complex motor tasks. The controller is realized using a force field around the desired position. This force field has a stable manifold tangential to the trajectory that guides subjects in velocity-related aspects. The force field has an unstable manifold perpendicular to the trajectory, which amplifies the perpendicular (spatial) error. We also designed a controller that applies randomly varying, unpredictable disturbing forces to enhance the subjects’ active participation by pushing them away from their “comfort zone.” We conducted an experiment with thirty-two healthy subjects to evaluate the impact of four different training strategies on motor skill learning and self-reported motivation: (i) no haptics, (ii) mixed guidance, (iii) perpendicular error amplification and tangential haptic guidance provided in sequential order, and (iv) randomly varying disturbing forces. Subjects trained two motor tasks using ARMin IV, a robotic exoskeleton for upper limb rehabilitation: follow circles with an ellipsoidal speed profile, and move along a 3D line following a complex speed profile. Mixed guidance showed no detectable learning advantages over the other groups. Results suggest that the effectiveness of the training strategies depends on the subjects’ initial skill level. Mixed guidance seemed to benefit subjects who performed the circle task with smaller errors during baseline (i.e., initially more skilled subjects), while training with no haptics was more beneficial for subjects who created larger errors (i.e., less skilled subjects). Therefore, perhaps the high functional

  15. Assessing systematic errors in GOSAT CO2 retrievals by comparing assimilated fields to independent CO2 data

    Science.gov (United States)

    Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.

    2012-12-01

    Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (version 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. With this assurance, we compare

  16. Effectiveness of Structured Psychodrama and Systematic Desensitization in Reducing Test Anxiety.

    Science.gov (United States)

    Kipper, David A.; Giladi, Daniel

    1978-01-01

    Students with examination anxiety took part in a study of the effectiveness of two kinds of treatment, structured psychodrama and systematic desensitization, in reducing test anxiety. Results showed that subjects in both treatment groups significantly reduced their test-anxiety scores. Structured psychodrama is as effective as systematic desensitization in…

  17. Evaluation of positioning errors of the patient using cone beam CT megavoltage

    International Nuclear Information System (INIS)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-01-01

    Image-guided radiation therapy allows the positioning of the patient in the treatment unit to be assessed and corrected, thus reducing the uncertainties due to patient positioning. This work assesses the systematic and random errors derived from the corrections made to a series of patients with different diseases through an off-line megavoltage cone beam CT (CBCT) protocol. (Author)

  18. Interventions to Reduce Adult Nursing Turnover: A Systematic Review of Systematic Reviews.

    Science.gov (United States)

    Halter, Mary; Pelone, Ferruccio; Boiko, Olga; Beighton, Carole; Harris, Ruth; Gale, Julia; Gourlay, Stephen; Drennan, Vari

    2017-01-01

    Nurse turnover is an issue of concern in health care systems internationally. Understanding which interventions are effective in reducing turnover rates is important to managers and health care organisations. Despite a plethora of reviews of such interventions, strength of evidence is hard to determine. We aimed to review literature on interventions to reduce turnover in nurses working in adult health care services in developed economies. We conducted an overview (systematic review of systematic reviews) using the Cochrane Database of Systematic Reviews, MEDLINE, EMBASE, Applied Social Sciences Index and Abstracts, CINAHL plus and SCOPUS and forward searching. We included reviews published between 1990 and January 2015 in English. We carried out parallel blinded selection, extraction of data and assessment of bias, using the Assessment of Multiple Systematic Reviews. We carried out a narrative synthesis. Despite the large body of published reviews, only seven reviews met the inclusion criteria. These provide moderate quality review evidence, albeit from poorly controlled primary studies. They provide evidence of effect for a small number of interventions which decrease turnover or increase retention of nurses, these being preceptorship of new graduates and leadership for group cohesion. We highlight that a large body of reviews does not equate with a large body of high quality evidence. Agreement on the measures and terminology to be used, together with well-designed, funded primary research, is urgently required to provide robust evidence on which nurse and human resource managers can base their nurse retention strategies.

  19. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    Science.gov (United States)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  20. Heuristics and Cognitive Error in Medical Imaging.

    Science.gov (United States)

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  1. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    International Nuclear Information System (INIS)

    DeSalvo, Riccardo

    2015-01-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similar to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism, different from loss angle and viscous models, is necessary. • The proposed mitigation measures may bring coherence to the measurements of G

  2. Error management process for power stations

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Takeda, Daisuke; Fujimoto, Junzo; Nagasaka, Akihiko

    2016-01-01

    The purpose of this study is to establish an 'error management process for power stations' for systematizing activities for human error prevention and for fostering continuous improvement of these activities. The following are proposed by deriving concepts concerning the error management process from existing knowledge and realizing them through application and evaluation of their effectiveness at a power station: an entire picture of the error management process that facilitates the four functions requisite for managing human error prevention effectively (1. systematizing human error prevention tools, 2. identifying problems based on incident reports and taking corrective actions, 3. identifying good practices and potential problems for taking proactive measures, 4. prioritizing human error prevention tools based on identified problems); detailed steps for each activity (i.e. developing an annual plan for human error prevention, reporting and analyzing incidents and near misses) based on a model of human error causation; procedures and examples of items for identifying gaps between current and desired levels of execution and outputs of each activity; and stages for introducing and establishing the above error management process at a power station. By giving shape to the above proposals at a power station, systematization and continuous improvement of activities for human error prevention in line with the actual situation of the power station can be expected. (author)

  3. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can…

  4. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Science.gov (United States)

    Beria, Harsh; Nanda, Trushnamayee; Singh Bisht, Deepak; Chatterjee, Chandranath

    2017-12-01

    The last couple of decades have seen the outburst of a number of satellite-based precipitation products with Tropical Rainfall Measuring Mission (TRMM) as the most widely used for hydrologic applications. Transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for year 2014 and compared with the corresponding (2014) and retrospective (1998-2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.
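    One simple way to separate a systematic error component from a random one, in the spirit of the evaluation described above, is to attribute the part of the satellite-reference mismatch captured by a linear fit to systematic error, and the residual to random error. The data and coefficients below are synthetic, purely for illustration; the study itself uses a more elaborate decomposition over real basin-scale estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
gauge = rng.gamma(shape=2.0, scale=5.0, size=500)          # reference rainfall (mm)
satellite = 0.7 * gauge + 1.5 + rng.normal(0, 2.0, 500)    # biased, noisy estimate

err = satellite - gauge
mse = np.mean(err**2)

# Least-squares fit of satellite on gauge: the fitted part of the mismatch is
# systematic (conditional bias); the residual is the random component.
slope, intercept = np.polyfit(gauge, satellite, 1)
systematic = (slope * gauge + intercept) - gauge
random_part = satellite - (slope * gauge + intercept)

sys_mse = np.mean(systematic**2)
rand_mse = np.mean(random_part**2)
print(f"total MSE {mse:.2f} = systematic {sys_mse:.2f} + random {rand_mse:.2f}")
```

    Because ordinary least squares residuals are orthogonal to the fitted values, the two components sum exactly to the total mean squared error, which makes the split easy to report per basin or per rainfall-intensity class.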

  5. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Directory of Open Access Journals (Sweden)

    H. Beria

    2017-12-01

    Full Text Available The last couple of decades have seen the outburst of a number of satellite-based precipitation products with Tropical Rainfall Measuring Mission (TRMM) as the most widely used for hydrologic applications. Transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for year 2014 and compared with the corresponding (2014) and retrospective (1998–2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  6. A continuous quality improvement project to reduce medication error in the emergency department.

    Science.gov (United States)

    Lee, Sara Bc; Lee, Larry Ly; Yeung, Richard Sd; Chan, Jimmy Ts

    2013-01-01

    Medication errors are a common source of adverse healthcare incidents, particularly in the emergency department (ED), which has a number of factors that make it prone to medication errors. This project aims to reduce medication errors and improve the health and economic outcomes of clinical care in a Hong Kong ED. In 2009, a task group was formed to identify problems that potentially endanger medication safety and to develop strategies to eliminate them. Responsible officers were assigned to look after seven error-prone areas. Strategies were proposed, discussed, endorsed and promulgated to eliminate the problems identified. Medication incidents (MI) fell from 16 to 6 after the improvement work. This project successfully established a concrete organizational structure to safeguard error-prone areas of medication safety in a sustainable manner.

  7. The effect on dose accumulation accuracy of inverse-consistency and transitivity error reduced deformation maps

    International Nuclear Information System (INIS)

    Hardcastle, Nicholas; Bender, Edward T.; Tomé, Wolfgang A.

    2014-01-01

    It has previously been shown that deformable image registrations (DIRs) often result in deformation maps that are neither inverse-consistent nor transitive, and that the dose accumulation based on these deformation maps can be inconsistent if different image pathways are used for dose accumulation. A method presented to reduce inverse consistency and transitivity errors has been shown to result in more consistent dose accumulation, regardless of the image pathway selected for dose accumulation. The present study investigates the effect on the dose accumulation accuracy of deformation maps processed to reduce inverse consistency and transitivity errors. A set of lung 4DCT phases were analysed, consisting of four images on which a dose grid was created. Dose to 75 corresponding anatomical locations was manually tracked. Dose accumulation was performed between all image sets with Demons derived deformation maps as well as deformation maps processed to reduce inverse consistency and transitivity errors. The ground truth accumulated dose was then compared with the accumulated dose derived from DIR. Two dose accumulation image pathways were considered. The post-processing method to reduce inverse consistency and transitivity errors had minimal effect on the dose accumulation accuracy. There was a statistically significant improvement in dose accumulation accuracy for one pathway, but for the other pathway there was no statistically significant difference. A post-processing technique to reduce inverse consistency and transitivity errors has a positive, yet minimal effect on the dose accumulation accuracy. Thus the post-processing technique improves consistency of dose accumulation with minimal effect on dose accumulation accuracy.
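    The inverse consistency error discussed above is easy to state concretely: a forward map F (image A to B) and a backward map G (B to A) are inverse-consistent when G(F(x)) = x, and the residual is the error. The 1D deformation functions below are invented stand-ins for real DIR output, chosen only to show the computation.

```python
import numpy as np

x = np.linspace(0, 100, 101)                  # voxel coordinates (mm)

def forward(pos):                             # hypothetical DIR result, A -> B
    return pos + 3.0 * np.sin(2 * np.pi * pos / 100)

def backward(pos):                            # independently computed B -> A map,
    return pos - 2.7 * np.sin(2 * np.pi * pos / 100)  # NOT the exact inverse of `forward`

# Inverse consistency error: residual after composing the two maps
ice = backward(forward(x)) - x
print(f"max inverse consistency error: {np.abs(ice).max():.2f} mm")
```

    Transitivity error is the analogous residual over three images (A to B to C versus A to C directly); the post-processing evaluated in the study reduces both residuals before the maps are used for dose accumulation.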

  8. Avoiding a Systematic Error in Assessing Fat Graft Survival in the Breast with Repeated Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Glovinski, Peter Viktor; Herly, Mikkel; Müller, Felix C

    2016-01-01

    Several techniques for measuring breast volume (BV) are based on examining the breast on magnetic resonance imaging. However, when techniques designed to measure total BV are used to quantify BV changes, for example, after fat grafting, a systematic error is introduced because BV changes lead to ...

  9. An improved approach to reduce partial volume errors in brain SPET

    International Nuclear Information System (INIS)

    Hatton, R.L.; Hatton, B.F.; Michael, G.; Barnden, L.; QUT, Brisbane, QLD; The Queen Elizabeth Hospital, Adelaide, SA

    1999-01-01

    Full text: Limitations in SPET resolution give rise to significant partial volume error (PVE) in small brain structures. We have investigated a previously published method (Muller-Gartner et al., J Cereb Blood Flow Metab 1992;16: 650-658) to correct PVE in grey matter using MRI. An MRI is registered and segmented to obtain a grey matter tissue volume, which is then smoothed to obtain resolution matched to the corresponding SPET. By dividing the original SPET by this correction map, structures can be corrected for PVE on a pixel-by-pixel basis. Since this approach is limited by space-invariant filtering, a modification was made by estimating projections for the segmented MRI and reconstructing these using identical parameters to SPET. The methods were tested on simulated brain scans, reconstructed with the ordered subsets EM algorithm (8, 16, 32, 64 equivalent EM iterations). The new method provided better recovery visually. For 32 EM iterations, recovery coefficients were calculated for grey matter regions. The effects of potential errors in the method were examined. Mean recovery was unchanged with a one pixel registration error, the maximum error found in most registration programs. Segmentation errors > 2 pixels result in loss of accuracy for small structures. The method promises to be useful for reducing PVE in brain SPET
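    The divide-by-smoothed-mask step described above can be shown in a simplified 1D sketch. This is an illustration of the general Muller-Gartner-style idea, not the authors' implementation: the structure size, resolution, and activity values are invented, and a real pipeline works on registered 3D volumes.

```python
import numpy as np

def gaussian_kernel(fwhm_mm, spacing_mm):
    """Normalized 1D Gaussian kernel for a given FWHM."""
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / spacing_mm
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

spacing = 1.0                                  # mm per sample
grey_mask = np.zeros(200)
grey_mask[80:110] = 1.0                        # segmented 30 mm grey-matter structure
true_activity = 50.0 * grey_mask               # uniform true uptake

kernel = gaussian_kernel(fwhm_mm=12.0, spacing_mm=spacing)
measured = np.convolve(true_activity, kernel, mode="same")   # resolution loss (PVE)
recovery = np.convolve(grey_mask, kernel, mode="same")       # correction map

# Divide by the recovery map where it is reliable (threshold avoids amplifying noise)
corrected = np.where(recovery > 0.1, measured / np.maximum(recovery, 1e-6), 0.0)

centre = slice(90, 100)
print(f"measured centre mean {measured[centre].mean():.1f}, "
      f"corrected centre mean {corrected[centre].mean():.1f} (true 50.0)")
```

    The sketch uses a space-invariant Gaussian, which is exactly the limitation the modified method addresses by smoothing via simulated projections and reconstruction instead.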

  10. Causes of medication administration errors in hospitals: a systematic review of quantitative and qualitative evidence.

    Science.gov (United States)

    Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M

    2013-11-01

    Underlying systems factors have been seen to be crucial contributors to the occurrence of medication errors. By understanding the causes of these errors, the most appropriate interventions can be designed and implemented to minimise their occurrence. This study aimed to systematically review and appraise empirical evidence relating to the causes of medication administration errors (MAEs) in hospital settings. Nine electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, ASSIA, PsycINFO, British Nursing Index, CINAHL, Health Management Information Consortium and Social Science Citations Index) were searched between 1985 and May 2013. Inclusion and exclusion criteria were applied to identify eligible publications through title analysis followed by abstract and then full text examination. English language publications reporting empirical data on causes of MAEs were included. Reference lists of included articles and relevant review papers were hand searched for additional studies. Studies were excluded if they did not report data on specific MAEs, used accounts from individuals not directly involved in the MAE concerned or were presented as conference abstracts with insufficient detail. A total of 54 unique studies were included. Causes of MAEs were categorised according to Reason's model of accident causation. Studies were assessed to determine relevance to the research question and how likely the results were to reflect the potential underlying causes of MAEs based on the method(s) used. Slips and lapses were the most commonly reported unsafe acts, followed by knowledge-based mistakes and deliberate violations. Error-provoking conditions influencing administration errors included inadequate written communication (prescriptions, documentation, transcription), problems with medicines supply and storage (pharmacy dispensing errors and ward stock management), high perceived workload, problems with ward-based equipment (access, functionality

  11. Reducing Error, Fraud and Corruption (EFC) in Social Protection Programs

    OpenAIRE

    Tesliuc, Emil Daniel; Milazzo, Annamaria

    2007-01-01

    Social Protection (SP) and Social Safety Net (SSN) programs channel a large amount of public resources, so it is important to make sure that these reach the intended beneficiaries. Error, fraud, or corruption (EFC) reduces the economic efficiency of these interventions by decreasing the amount of money that goes to the intended beneficiaries, and erodes the political support for the program. ...

  12. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
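    The core intuition behind the abstract above, that coherent errors accumulate faster than a Pauli model predicts, can be shown with a single-qubit toy calculation. This is a hedged illustration, not the paper's derivation: n repeated small rotations by angle ε add in amplitude (failure probability roughly sin²(nε)), whereas the Pauli-twirled model adds probabilities (roughly n·sin²(ε)).

```python
import math

eps = 0.01   # rotation angle per cycle (radians), illustrative value
for n in (1, 10, 100):
    coherent = math.sin(n * eps) ** 2       # amplitudes add, then square
    pauli = n * math.sin(eps) ** 2          # probabilities add
    print(f"n={n:4d}  coherent {coherent:.2e}  pauli {pauli:.2e}  "
          f"ratio {coherent / pauli:.1f}")
```

    For small angles the coherent failure probability grows like n²ε² versus nε² for the Pauli model, so the ratio grows roughly linearly in n until the sine saturates, which is why the Pauli approximation only holds for a limited number of correction cycles.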

  13. Increased errors and decreased performance at night: A systematic review of the evidence concerning shift work and quality.

    Science.gov (United States)

    de Cordova, Pamela B; Bradford, Michelle A; Stone, Patricia W

    2016-02-15

    Shift workers have worse health outcomes than employees who work standard business hours. However, it is unclear how this poorer health may be related to employee work productivity. The purpose of this systematic review is to assess the relationship between shift work and errors and performance. Searches of MEDLINE/PubMed, EBSCOhost, and CINAHL were conducted to identify articles that examined the relationship between shift work, errors, quality, productivity, and performance. All articles were assessed for study quality. A total of 435 abstracts were screened, with 13 meeting inclusion criteria. Eight studies were rated to be of strong methodological quality. Nine studies demonstrated a positive relationship, with night shift workers committing more errors and showing decreased performance. Night shift workers have worse health that may contribute to errors and decreased performance in the workplace.

  14. The possible benefits of reduced errors in the motor skills acquisition of children

    Directory of Open Access Journals (Sweden)

    Capio Catherine M

    2012-01-01

    An implicit approach to motor learning suggests that relatively complex movement skills may be better acquired in environments that constrain errors during the initial stages of practice. This current concept paper proposes that reducing the number of errors committed during motor learning leads to stable performance when attention demands are increased by concurrent cognitive tasks. While it appears that this approach to practice may be beneficial for motor learning, further studies are needed to both confirm this advantage and better understand the underlying mechanisms. An approach involving error minimization during early learning may have important applications in paediatric rehabilitation.

  15. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Sen; Li, Guangjun; Wang, Maojie; Jiang, Qinfeng; Zhang, Yingjie [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China); Wei, Yuquan, E-mail: yuquawei@vip.sina.com [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China)

    2013-07-01

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulating plans and the clinical plans with evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. Dose distribution was highly sensitive to systematic MLC leaf position errors, with the magnitude of the effect depending on field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.

  16. Novel error propagation approach for reducing H2S/O2 reaction mechanism

    International Nuclear Information System (INIS)

    Selim, H.; Gupta, A.K.; Sassi, M.

    2012-01-01

    A reduction strategy for the hydrogen sulfide/oxygen reaction mechanism is presented to simplify the detailed mechanism. A direct relation graph with error propagation (DRGEP) methodology has been used. A novel approach of direct elementary reaction error (DERE) has been developed in this study, which allowed for further reduction of the reaction mechanism. The reduced mechanism has been compared with the detailed mechanism under different conditions to demonstrate its validity. The results obtained from the resulting reduced mechanism showed good agreement with those from the detailed mechanism. However, some discrepancies were found for some species; hydrogen and oxygen mole fractions showed the largest discrepancy of all combustion products. The reduced mechanism was also found to be capable of tracking the changes that occur in chemical kinetics as reaction conditions change. A comparison of the ignition delay time obtained from the reduced mechanism with previous experimental data showed good agreement. The reduced mechanism was used to track changes in mechanistic pathways of Claus reactions with the reaction progress.
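    The core of DRGEP is simple to state: each species pair gets a direct interaction coefficient, and the overall importance of a species to a chosen target is the maximum product of coefficients over all graph paths; species below a threshold are pruned. A minimal sketch (the species names, coefficients, and threshold below are illustrative placeholders, not the actual H2S/O2 values):

    ```python
    # Sketch of the DRGEP importance propagation: edge weight r[A][B]
    # estimates the direct coupling of species B to the production of A;
    # the importance of B to a target T is the maximum path product.
    def drgep_importance(graph, target):
        """Return the max path-product coefficient from target to each species."""
        importance = {target: 1.0}
        frontier = [target]
        while frontier:
            a = frontier.pop()
            for b, r_ab in graph.get(a, {}).items():
                cand = importance[a] * r_ab
                if cand > importance.get(b, 0.0):
                    importance[b] = cand
                    frontier.append(b)     # re-visit: a better path was found
        return importance

    # Illustrative graph only -- not the real H2S/O2 coefficients.
    graph = {
        "H2S": {"SH": 0.9, "SO2": 0.3},
        "SH":  {"S2": 0.5, "SO2": 0.6},
        "SO2": {"SO": 0.8},
        "SO":  {"S2": 0.1},
    }
    imp = drgep_importance(graph, "H2S")
    kept = {s for s, v in imp.items() if v >= 0.5}   # prune below threshold
    print(sorted(kept))
    ```

    Note that SO2 is kept through the indirect path H2S→SH→SO2 (0.9 × 0.6 = 0.54) even though its direct coefficient (0.3) falls below the threshold; capturing such indirect coupling is exactly what the error-propagation step adds over a plain direct relation graph.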

  17. Reducing WCET Overestimations by Correcting Errors in Loop Bound Constraints

    Directory of Open Access Journals (Sweden)

    Fanqi Meng

    2017-12-01

    In order to reduce overestimations of worst-case execution time (WCET), in this article we first report a kind of specific WCET overestimation caused by non-orthogonal nested loops. Then, we propose a novel correction approach which has three basic steps. The first step is to locate the worst-case execution path (WCEP) in the control flow graph and then map it onto source code. The second step is to identify non-orthogonal nested loops on the WCEP by means of an abstract syntax tree. The last step is to recursively calculate the WCET errors caused by the loose loop bound constraints, and then subtract the total errors from the overestimations. The novelty lies in the fact that the WCET correction is only conducted on the non-branching part of the WCEP, thus avoiding potential safety risks caused by possible WCEP switches. Experimental results show that our approach reduces the specific WCET overestimation by an average of more than 82%, and 100% of the corrected WCET values are no less than the actual WCET. Thus, our approach is not only effective but also safe. It will help developers to design energy-efficient and safe real-time systems.

  18. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    Science.gov (United States)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors: (1) an extended Kalman filter (EKF) augmented with Markov states, and (2) an unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
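    The first method amounts to carrying the slowly varying ST systematic error as an extra filter state. A minimal single-axis sketch of such state augmentation with a first-order Gauss-Markov bias (the matrices, correlation time, and initial values are illustrative assumptions, not the flight filter):

    ```python
    import numpy as np

    # Augment the state with a first-order Gauss-Markov bias b to absorb a
    # slowly varying star-tracker systematic error:
    #   b[k+1] = exp(-dt/tau) * b[k] + w[k],  ST measurement y = attitude + b + v.
    dt, tau = 1.0, 100.0          # sample time [s], bias correlation time [s]
    phi_b = np.exp(-dt / tau)     # Gauss-Markov transition factor

    # Augmented transition for [attitude_error, gyro_bias, st_bias] (1-axis toy):
    F = np.array([
        [1.0, -dt, 0.0],          # attitude error accumulates gyro bias
        [0.0, 1.0, 0.0],          # gyro bias modeled as a random walk
        [0.0, 0.0, phi_b],        # ST bias as a Gauss-Markov state
    ])

    # The ST measurement sees the attitude error plus its own bias:
    H = np.array([[1.0, 0.0, 1.0]])

    x = np.array([0.0, 1e-4, 5e-4])   # illustrative initial state
    x_pred = F @ x                    # one time-propagation step
    print(x_pred)
    ```

    Because the bias is observable only through its slow dynamics, the KF can separate it from the attitude error over time, which is the mechanism by which the augmented EKF reduces the systematic-error contribution.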

  19. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after a few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  20. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Harper, C.S.; Wee, L.

    2011-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC positional errors on step and shoot IMRT of prostate cancer. A total of 12 perturbations of MLC leaf banks were introduced to six prostate IMRT treatment plans to simulate MLC systematic positional errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) for planning target volume (PTV), rectum and bladder. Negative perturbations of MLC were found to produce greater changes to endpoint dose parameters than positive perturbations of MLC, with median changes in D95 of -1.2 and 0.9% respectively. Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in D95 of -2.3 and 1.8% respectively. Doses to rectum were generally more sensitive to systematic MLC errors than doses to bladder (p < 0.01). Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in endpoint dose parameters of rectum and bladder from 1.0 to 2.5%. Maximum reductions of -4.4 and -7.3% were recorded for conformity index (CI) and healthy tissue avoidance (HTA) respectively due to synchronised MLC perturbation of 1 mm. MLC errors resulted in dosimetric changes in IMRT plans for prostate cancer. (author)
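    Endpoint parameters such as D95 (the dose received by at least 95% of the structure volume) are read off the cumulative DVH by interpolation. A minimal sketch with a synthetic DVH (the linear shape is purely illustrative, chosen so the expected answer is obvious):

    ```python
    import numpy as np

    def d95(dose_gy, cum_volume_frac):
        """Dose received by at least 95% of the volume, interpolated from a
        cumulative DVH (volume fraction decreasing with dose)."""
        # np.interp requires increasing x, so interpolate on reversed arrays.
        return np.interp(0.95, cum_volume_frac[::-1], dose_gy[::-1])

    # Synthetic linear DVH: volume fraction falls from 1 at 0 Gy to 0 at
    # 70 Gy, so D95 should sit at 0.05 * 70 = 3.5 Gy.
    dose = np.linspace(0.0, 70.0, 141)
    volume = 1.0 - dose / 70.0
    print(d95(dose, volume))
    ```

    Re-running the same extraction on DVHs from each perturbed plan and differencing against the unperturbed plan gives the percentage endpoint changes reported in the abstract.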

  1. Nature versus nurture: A systematic approach to elucidate gene-environment interactions in the development of myopic refractive errors.

    Science.gov (United States)

    Miraldi Utz, Virginia

    2017-01-01

    Myopia is the most common eye disorder and major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach using a combination of gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Differential expression of refractive error was associated with time spent reading for those with low frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.

  2. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the reaction energies on the applied exchange–correlation functional rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, which is of particular interest in heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy corrected OCO backbone.

  3. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  4. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  5. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals' psychological well-being.

    Science.gov (United States)

    Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry

    2010-12-01

    Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error [2, 3]. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted with the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.

  6. Economic impact of medication error: a systematic review.

    Science.gov (United States)

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  7. IceCube systematic errors investigation: Simulation of the ice

    Energy Technology Data Exchange (ETDEWEB)

    Resconi, Elisa; Wolf, Martin [Max-Planck-Institute for Nuclear Physics, Heidelberg (Germany); Schukraft, Anne [RWTH, Aachen University (Germany)

    2010-07-01

    IceCube is a neutrino observatory for astroparticle and astronomy research at the South Pole. It uses one cubic kilometer of Antarctica's deepest ice (1500 m-2500 m in depth) to detect Cherenkov light, generated by charged particles traveling through the ice, with an array of phototubes encapsulated in glass pressure spheres. The arrival time and deposited charge of the detected photons are the base measurements used for track and energy reconstruction of those charged particles. The optical properties of the deep Antarctic ice vary from layer to layer. Measurements of the ice properties and their correct modeling in Monte Carlo simulation are therefore of primary importance for the correct understanding of the IceCube telescope behavior. After a short summary of the different methods to investigate the ice properties and to calibrate the detector, we show how the simulation obtained by using this information compares to the measured data and how systematic errors due to uncertain ice properties are determined in IceCube.

  8. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals, we reviewed 515 scientific paper references selected by systematic randomized sampling and corroborated reference information with the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion on this theme. Keywords: references, periodicals, research, bibliometrics.

  9. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  10. ILRS Activities in Monitoring Systematic Errors in SLR Data

    Science.gov (United States)

    Pavlis, E. C.; Luceri, V.; Kuzmicz-Cieslak, M.; Bianco, G.

    2017-12-01

    The International Laser Ranging Service (ILRS) contributes to ITRF development unique information that only Satellite Laser Ranging (SLR) is sensitive to: the definition of the origin and, in equal parts with VLBI, the scale of the model. For the development of ITRF2014, the ILRS analysts adopted a revision of the internal standards and procedures in generating our contribution from the eight ILRS Analysis Centers. The improved results for the ILRS components were reflected in the resulting new time series of the ITRF origin and scale, showing insignificant trends and tighter scatter. This effort was further extended after the release of ITRF2014, with the execution of a Pilot Project (PP) in the 2016-2017 timeframe that demonstrated the robust estimation of persistent systematic errors at the millimeter level. The ILRS ASC is now turning this into an operational tool to monitor station performance and to generate a history of systematics at each station, to be used with each re-analysis for future ITRF model developments. This is part of a broader ILRS effort to improve the quality control of the data collection process as well as that of our products. To this end, the ILRS has established a "Quality Control Board" (QCB) that comprises members from the analysis and engineering groups, the Central Bureau, and user groups with special interests. The QCB meets by telecon monthly, oversees the various ongoing projects, and develops ideas for new tools and future products. This presentation will focus on this main topic with an update on the results so far, the schedule for the near future and its operational implementation, along with a brief description of upcoming new ILRS products.

  11. Managing Systematic Errors in a Polarimeter for the Storage Ring EDM Experiment

    Science.gov (United States)

    Stephenson, Edward J.; Storage Ring EDM Collaboration

    2011-05-01

    The EDDA plastic scintillator detector system at the Cooler Synchrotron (COSY) has been used to demonstrate that it is possible using a thick target at the edge of the circulating beam to meet the requirements for a polarimeter to be used in the search for an electric dipole moment on the proton or deuteron. Emphasizing elastic and low Q-value reactions leads to large analyzing powers and, along with thick targets, to efficiencies near 1%. Using only information obtained comparing count rates for oppositely vector-polarized beam states and a calibration of the sensitivity of the polarimeter to rate and geometric changes, the contribution of systematic errors can be suppressed below the level of one part per million.
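    Comparing count rates between oppositely polarized beam states is commonly done through a cross-ratio, in which luminosity and detector-efficiency factors cancel; this is one standard way such rate comparisons suppress systematics (a sketch of the generic technique, not the EDDA analysis code, with all rate values illustrative):

    ```python
    import math

    def cross_ratio_asymmetry(l_up, r_up, l_dn, r_dn):
        """Extract the asymmetry epsilon from left/right rates in the two
        beam polarization states via the cross-ratio; luminosity and
        detector-efficiency factors cancel exactly in the ratio."""
        r = math.sqrt((l_up * r_dn) / (l_dn * r_up))
        return (r - 1.0) / (r + 1.0)

    # Synthetic rates with deliberately mismatched luminosities and
    # left/right efficiencies (illustrative values):
    eps, lum_up, lum_dn, eff_l, eff_r = 0.05, 1.0, 1.3, 0.8, 1.1
    l_up = lum_up * eff_l * (1 + eps)
    r_up = lum_up * eff_r * (1 - eps)
    l_dn = lum_dn * eff_l * (1 - eps)
    r_dn = lum_dn * eff_r * (1 + eps)
    print(cross_ratio_asymmetry(l_up, r_up, l_dn, r_dn))
    ```

    Even with a 30% luminosity mismatch and unequal detector efficiencies, the extracted asymmetry equals the input value, illustrating why rate comparisons between opposite polarization states are so robust against instrumental systematics.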

  12. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    Science.gov (United States)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
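    The linear and additive behavior of retrieval biases can be reproduced with a toy regularized inversion: perturb the input data by δ and 2δ and compare the induced retrieval shifts. A minimal sketch with a Tikhonov-regularized least-squares problem (the kernel A, data b, and regularization weight are illustrative stand-ins, not the PIC software):

    ```python
    import numpy as np

    # Toy Tikhonov inversion x = argmin ||A x - b||^2 + lam ||x||^2, used to
    # illustrate the (near-)linear mapping from a systematic bias in the
    # optical data b to the bias in the retrieved parameters x.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(5, 3))        # illustrative forward-model kernel
    x_true = np.array([1.0, -0.5, 2.0])
    b = A @ x_true                     # unperturbed "optical data"
    lam = 1e-2

    def retrieve(b_obs):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b_obs)

    delta = 0.05 * np.ones(5)          # systematic bias on the input data
    err1 = retrieve(b + delta) - retrieve(b)
    err2 = retrieve(b + 2 * delta) - retrieve(b)
    print(err2 / err1)                 # ~2 in every component
    ```

    For a linear inverse operator the doubling is exact; in the nonlinear lidar retrieval the abstract reports the same behavior approximately, which is what justifies adding individually induced deviations to predict the effect of simultaneous biases.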

  13. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    Science.gov (United States)

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
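    The mtanh fit itself is a standard nonlinear least-squares problem. A minimal sketch on a synthetic noisy profile, using a commonly quoted mtanh form (the parameterization and all numerical values are illustrative assumptions, not necessarily JET's exact fitting tool):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mtanh_profile(r, height, width, position, offset, slope):
        """Modified-tanh pedestal: a tanh step of given height and width
        centered at 'position', with a linear core slope on the pedestal top
        (a common form in the literature)."""
        x = (position - r) / (2.0 * width)
        mtanh = ((1.0 + slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
        return 0.5 * height * (mtanh + 1.0) + offset

    # Synthetic "pre-ELM" profile with noise, then a fit to recover the width.
    r = np.linspace(3.6, 3.9, 120)
    true = (1.0, 0.012, 3.82, 0.05, 0.1)     # illustrative pedestal parameters
    rng = np.random.default_rng(1)
    y = mtanh_profile(r, *true) + rng.normal(0.0, 0.01, r.size)
    popt, _ = curve_fit(mtanh_profile, r, y, p0=(0.8, 0.02, 3.8, 0.0, 0.0))
    print(f"fitted width = {popt[1]:.4f}")
    ```

    In the actual HRTS analysis the same fit is performed after forward-convolving the model with the instrument function and on ELM-synchronised, overlaid profiles; the synthetic-profile approach in the paper repeats this fit many times to quantify the width error.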

  14. Electronic laboratory system reduces errors in National Tuberculosis Program: a cluster randomized controlled trial.

    Science.gov (United States)

    Blaya, J A; Shin, S S; Yale, G; Suarez, C; Asencios, L; Contreras, C; Rodriguez, P; Kim, J; Cegielski, P; Fraser, H S F

    2010-08-01

    To evaluate the impact of the e-Chasqui laboratory information system in reducing reporting errors compared to the current paper system, a cluster randomized controlled trial was conducted in 76 health centers (HCs) between 2004 and 2008. Baseline data were collected every 4 months for 12 months. HCs were then randomly assigned to intervention (e-Chasqui) or control (paper). Further data were collected for the same months the following year. Comparisons were made between intervention and control HCs, and before and after the intervention. Intervention HCs had respectively 82% and 87% fewer errors in reporting results for drug susceptibility tests (2.1% vs. 11.9%, P = 0.001, OR 0.17, 95%CI 0.09-0.31) and cultures (2.0% vs. 15.1%). e-Chasqui users sent on average three electronic error reports per week to the laboratories. e-Chasqui reduced the number of missing laboratory results at point-of-care health centers. Clinical users confirmed viewing electronic results not available on paper. Reporting errors to the laboratory using e-Chasqui promoted continuous quality improvement. The e-Chasqui laboratory information system is an important part of laboratory infrastructure improvements to support multidrug-resistant tuberculosis care in Peru.
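    Odds ratios with Wald confidence intervals of the kind quoted above follow a standard log-odds formula. A minimal sketch (the 2x2 counts below are hypothetical, chosen only to roughly reproduce the 2.1% vs. 11.9% error rates; the paper's raw counts and clustering adjustment are not given in the abstract):

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI for a 2x2 table:
        a/b = errors/non-errors in intervention, c/d = same in control."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # Hypothetical counts: 21/1000 errors (2.1%) vs. 119/1000 errors (11.9%).
    or_, lo, hi = odds_ratio_ci(21, 979, 119, 881)
    print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```

    Note that a crude calculation on such counts gives an OR near 0.16; the published 0.17 with its wider CI presumably reflects the cluster-randomized design, which a plain 2x2 formula does not account for.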

  15. Using humor in systematic desensitization to reduce fear.

    Science.gov (United States)

    Ventis, W L; Higbee, G; Murdock, S A

    2001-04-01

    Effectiveness of systematic desensitization for fear reduction, using humorous hierarchy scenes without relaxation, was tested. Participants were 40 students highly fearful of spiders. Using a 24-item behavioral approach test with an American tarantula, participants were matched on fear level and randomly assigned to 1 of 3 treatment groups: (a) systematic desensitization, (b) humor desensitization, and (c) untreated controls. Each participant was seen for 6 sessions, including pretest and posttest. Analyses of covariance of posttest scores revealed that the 2 treatment groups showed greater reduction in fear than the controls on 3 measures but did not differ from each other. Therefore, humor in systematic desensitization reduced fear as effectively as more traditional desensitization. This finding may have therapeutic applications; however, it may also be applicable in advertising to desensitize fear of a dangerous product, such as cigarettes.

  16. Improving Type Error Messages in OCaml

    Directory of Open Access Journals (Sweden)

    Arthur Charguéraud

    2015-12-01

    Full Text Available Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently-precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise and systematically useful to the programmer, but also to handle a full-blown programming language and to cope with large-sized programs efficiently. In this work, we present a modification to the traditional ML type inference algorithm implemented in OCaml that, by significantly reducing the left-to-right bias, allows us to report error messages that are more helpful to the programmer. Our algorithm remains fully predictable and continues to produce fairly concise error messages that always help make some progress towards fixing the code. We implemented our approach as a patch to the OCaml compiler in just a few hundred lines of code. We believe that this patch should benefit not just beginners, but also experienced programmers developing large-scale OCaml programs.

  17. Aliasing errors in measurements of beam position and ellipticity

    International Nuclear Information System (INIS)

    Ekdahl, Carl

    2005-01-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.

  18. Aliasing errors in measurements of beam position and ellipticity

    Science.gov (United States)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.
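
    The aliasing mechanism described in the two records above can be demonstrated in a few lines. With N equally spaced detectors, azimuthal harmonic m is indistinguishable from harmonics m ± N, so a pure m = 3 perturbation leaks into the m = 1 (position) coefficient of a four-detector BPM but not of an eight-detector one. The idealized cosine signal and function name below are our own illustration, not the simulation code used in the paper.

```python
import math

def discrete_cos_coeff(signal, n_det, harmonic):
    # Cosine Fourier coefficient of `harmonic`, estimated from n_det
    # equally spaced azimuthal samples of the detector signal.
    thetas = [2 * math.pi * k / n_det for k in range(n_det)]
    return (2 / n_det) * sum(signal(t) * math.cos(harmonic * t) for t in thetas)

# A pure m = 3 azimuthal perturbation (no real m = 1 offset at all):
sextupole = lambda theta: math.cos(3 * theta)

a1_four = discrete_cos_coeff(sextupole, 4, 1)   # 4-detector BPM: m=3 aliases onto m=1
a1_eight = discrete_cos_coeff(sextupole, 8, 1)  # 8-detector BPM: no aliasing of m=3
```

With four detectors the m = 3 content is fully misread as a position signal; adding detectors pushes the first aliased harmonic to higher order, which is the paper's suggested remedy.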

  19. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors

    NARCIS (Netherlands)

    Stroobants, A. K.; Goldschmidt, H. M. J.; Plebani, M.

    2003-01-01

    Background: Random, systematic and sporadic errors, which unfortunately are not uncommon in laboratory medicine, can have a considerable impact on the well being of patients. Although somewhat difficult to attain, our main goal should be to prevent all possible errors. A good insight on error-prone

  20. Reducing wrong patient selection errors: exploring the design space of user interface techniques.

    Science.gov (United States)

    Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben

    2014-01-01

    Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients' identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty-seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed.

  1. Simulating systematic errors in X-ray absorption spectroscopy experiments: Sample and beam effects

    Energy Technology Data Exchange (ETDEWEB)

    Curis, Emmanuel [Laboratoire de Biomathematiques, Faculte de Pharmacie, Universite Rene, Descartes (Paris V)-4, Avenue de l' Observatoire, 75006 Paris (France)]. E-mail: emmanuel.curis@univ-paris5.fr; Osan, Janos [KFKI Atomic Energy Research Institute (AEKI)-P.O. Box 49, H-1525 Budapest (Hungary); Falkenberg, Gerald [Hamburger Synchrotronstrahlungslabor (HASYLAB), Deutsches Elektronen-Synchrotron (DESY)-Notkestrasse 85, 22607 Hamburg (Germany); Benazeth, Simone [Laboratoire de Biomathematiques, Faculte de Pharmacie, Universite Rene, Descartes (Paris V)-4, Avenue de l' Observatoire, 75006 Paris (France); Laboratoire d' Utilisation du Rayonnement Electromagnetique (LURE)-Ba-hat timent 209D, Campus d' Orsay, 91406 Orsay (France); Toeroek, Szabina [KFKI Atomic Energy Research Institute (AEKI)-P.O. Box 49, H-1525 Budapest (Hungary)

    2005-07-15

    The article presents an analytical model to simulate experimental imperfections in the realization of an X-ray absorption spectroscopy experiment, performed in transmission or fluorescence mode. Distinction is made between sources of systematic errors on a time-scale basis, to select the more appropriate model for their handling. For short time-scale, statistical models are the most suited. For large time-scale, the model is developed for sample and beam imperfections: mainly sample inhomogeneity, sample self-absorption, beam achromaticity. The ability of this model to reproduce the effects of these imperfections is exemplified, and the model is validated on real samples. Various potential application fields of the model are then presented.

  2. Simulating systematic errors in X-ray absorption spectroscopy experiments: Sample and beam effects

    International Nuclear Information System (INIS)

    Curis, Emmanuel; Osan, Janos; Falkenberg, Gerald; Benazeth, Simone; Toeroek, Szabina

    2005-01-01

    The article presents an analytical model to simulate experimental imperfections in the realization of an X-ray absorption spectroscopy experiment, performed in transmission or fluorescence mode. Distinction is made between sources of systematic errors on a time-scale basis, to select the more appropriate model for their handling. For short time-scale, statistical models are the most suited. For large time-scale, the model is developed for sample and beam imperfections: mainly sample inhomogeneity, sample self-absorption, beam achromaticity. The ability of this model to reproduce the effects of these imperfections is exemplified, and the model is validated on real samples. Various potential application fields of the model are then presented

  3. Prevalence and reporting of recruitment, randomisation and treatment errors in clinical trials: A systematic review.

    Science.gov (United States)

    Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A

    2018-06-01

    Background/aims In clinical trials, it is not unusual for errors to occur during the process of recruiting, randomising and providing treatment to participants. For example, an ineligible participant may inadvertently be randomised, a participant may be randomised in the incorrect stratum, a participant may be randomised multiple times when only a single randomisation is permitted or the incorrect treatment may inadvertently be issued to a participant at randomisation. Such errors have the potential to introduce bias into treatment effect estimates and affect the validity of the trial, yet there is little motivation for researchers to report these errors and it is unclear how often they occur. The aim of this study is to assess the prevalence of recruitment, randomisation and treatment errors and review current approaches for reporting these errors in trials published in leading medical journals. Methods We conducted a systematic review of individually randomised, phase III, randomised controlled trials published in New England Journal of Medicine, Lancet, Journal of the American Medical Association, Annals of Internal Medicine and British Medical Journal from January to March 2015. The number and type of recruitment, randomisation and treatment errors that were reported and how they were handled were recorded. The corresponding authors were contacted for a random sample of trials included in the review and asked to provide details on unreported errors that occurred during their trial. Results We identified 241 potentially eligible articles, of which 82 met the inclusion criteria and were included in the review. These trials involved a median of 24 centres and 650 participants, and 87% involved two treatment arms. Recruitment, randomisation or treatment errors were reported in 32 of 82 trials (39%), with a median of eight errors. The most commonly reported error was ineligible participants inadvertently being randomised. No mention of recruitment, randomisation

  4. Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with the cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

  5. Customization of user interfaces to reduce errors and enhance user acceptance.

    Science.gov (United States)

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  6. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or ²³⁵U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, ²³⁵U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and ²³⁵U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
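
    The first element of the methodology, variance approximation by Taylor series expansion, can be sketched briefly. For a product such as mass = weight × concentration × enrichment, the first-order expansion says the relative variances add. The Python below checks that approximation against a Monte Carlo simulation; all numerical values are illustrative, not data from the exercise.

```python
import math
import random
import statistics

# Illustrative item-level measurement: net weight (kg), uranium
# concentration (fraction), and enrichment (fraction), each with a
# measurement standard deviation.
w, sw = 100.0, 0.5
c, sc = 0.80, 0.004
e, se = 0.03, 0.0003

# First-order Taylor expansion of m = w*c*e: relative variances add.
m = w * c * e
rel_var = (sw / w) ** 2 + (sc / c) ** 2 + (se / e) ** 2
sigma_taylor = m * math.sqrt(rel_var)

# Monte Carlo check of the linearized approximation.
random.seed(1)
samples = [random.gauss(w, sw) * random.gauss(c, sc) * random.gauss(e, se)
           for _ in range(100_000)]
sigma_mc = statistics.stdev(samples)
```

For small relative errors the linearization is excellent; the Monte Carlo standard deviation agrees with the Taylor estimate to within a few percent.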

  7. Reducing number entry errors: solving a widespread, serious problem.

    Science.gov (United States)

    Thimbleby, Harold; Cairns, Paul

    2010-10-06

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229).
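
    A detector for the 'out by 10' class of error can be sketched in a few lines. The reference value, tolerance, and function name below are our own assumptions for illustration; the paper itself argues for user-interface-level defences rather than this particular check.

```python
def out_by_ten(entered, reference, tol=0.05):
    """Flag entries that differ from a reference value by roughly a
    factor of ten -- the signature of a miskeyed decimal point or an
    extra/missing zero. `reference` might be a typical or previously
    entered value (a hypothetical source, for illustration only)."""
    if entered <= 0 or reference <= 0:
        return False
    ratio = entered / reference
    return (abs(ratio - 10) / 10 < tol) or (abs(ratio - 0.1) / 0.1 < tol)

assert out_by_ten(50.0, 5.0)      # "5.0" keyed as "50": flagged
assert out_by_ten(0.5, 5.0)       # decimal point slipped the other way
assert not out_by_ten(5.2, 5.0)   # ordinary small deviation: not flagged
```

A user interface could use such a check to ask for confirmation before accepting a suspicious value, rather than silently processing it.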

  8. Systematic errors in the readings of track etch neutron dosemeters caused by the energy dependence of response

    International Nuclear Information System (INIS)

    Tanner, R.J.; Thomas, D.J.; Bartlett, D.T.; Horwood, N.

    1999-01-01

    A study has been performed to assess the extent to which variations in the energy dependence of response of neutron personal dosemeters can cause systematic errors in readings obtained in workplace fields. This involved a detailed determination of the response functions of personal dosemeters used in the UK. These response functions were folded with workplace spectra to ascertain the under- or over-response in workplace fields.

  9. Systematic errors in the readings of track etch neutron dosemeters caused by the energy dependence of response

    CERN Document Server

    Tanner, R J; Bartlett, D T; Horwood, N

    1999-01-01

    A study has been performed to assess the extent to which variations in the energy dependence of response of neutron personal dosemeters can cause systematic errors in readings obtained in workplace fields. This involved a detailed determination of the response functions of personal dosemeters used in the UK. These response functions were folded with workplace spectra to ascertain the under- or over-response in workplace fields.

  10. Double checking medicines: defence against error or contributory factor?

    Science.gov (United States)

    Armitage, Gerry

    2008-08-01

    The double checking of medicines in health care is a contestable procedure. It occupies an obvious position in health care practice and is understood to be an effective defence against medication error but the process is variable and the outcomes have not been exposed to testing. This paper presents an appraisal of the process using data from part of a larger study on the contributory factors in medication errors and their reporting. Previous research studies are reviewed; data are analysed from a review of 991 drug error reports and a subsequent series of 40 in-depth interviews with health professionals in an acute hospital in northern England. The incident reports showed that errors occurred despite double checking but that action taken did not appear to investigate the checking process. Most interview participants (34) talked extensively about double checking but believed the process to be inconsistent. Four key categories were apparent: deference to authority, reduction of responsibility, automatic processing and lack of time. Solutions to the problems were also offered, which are discussed with several recommendations. Double checking medicines should be a selective and systematic procedure informed by key principles and encompassing certain behaviours. Psychological research may be instructive in reducing checking errors but the aviation industry may also have a part to play in increasing error wisdom and reducing risk.

  11. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  12. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
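
    The ANOVA-based estimation described in the two records above can be sketched for the balanced one-factor random effects model. The within mean square estimates the random variance, and (MSB − MSW)/n estimates the systematic (between-patient) variance; the conventional estimate, the standard deviation of the per-patient mean setup errors, is inflated by σ_random²/n, which is largest in hypofractionated (small n) settings. The simulation below uses plain Python with illustrative values, not the note's clinical data.

```python
import random
import statistics

random.seed(42)
a, n = 50, 5                 # patients, fractions per patient (balanced data)
sigma_b, sigma_w = 2.0, 3.0  # true systematic and random SDs (mm), illustrative

data = []
for _ in range(a):
    mu_i = random.gauss(0.0, sigma_b)   # patient-specific systematic shift
    data.append([random.gauss(mu_i, sigma_w) for _ in range(n)])

means = [statistics.mean(row) for row in data]
msw = statistics.mean([statistics.variance(row) for row in data])  # within mean square
msb = n * statistics.variance(means)                               # between mean square

sigma_w_hat = msw ** 0.5                              # random component
sigma_b_naive = statistics.stdev(means)               # conventional: SD of patient means
sigma_b_anova = max(0.0, (msb - msw) / n) ** 0.5      # ANOVA-based systematic component
```

The naive estimate targets σ_b² + σ_w²/n rather than σ_b², so it always exceeds the ANOVA-based estimate, reproducing the overestimation the note describes.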

  13. Noncontact thermometry via laser pumped, thermographic phosphors: Characterization of systematic errors and industrial applications

    International Nuclear Information System (INIS)

    Gillies, G.T.; Dowell, L.J.; Lutz, W.N.; Allison, S.W.; Cates, M.R.; Noel, B.W.; Franks, L.A.; Borella, H.M.

    1987-10-01

    There are a growing number of industrial measurement situations that call for a high precision, noncontact method of thermometry. Our collaboration has been successful in developing one such method based on the laser-induced fluorescence of rare-earth-doped ceramic phosphors like Y₂O₃:Eu. In this paper, we summarize the results of characterization studies aimed at identifying the sources of systematic error in a laboratory-grade version of the method. We then go on to present data from measurements made in the afterburner plume of a jet turbine and inside an operating permanent magnet motor. 12 refs., 6 figs.
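
    A common readout for thermographic phosphors is the fluorescence decay time, which is mapped to temperature through a calibration curve; whether decay time is the readout used in this particular work is an assumption on our part. The sketch below extracts the decay constant of a synthetic exponential decay by a log-linear least-squares fit, the simplest version of that processing step.

```python
import math

# Synthetic fluorescence decay I(t) = I0 * exp(-t / tau). In practice
# tau would then be converted to temperature via a calibration curve
# (not modelled here).
tau_true, i0 = 1.2e-3, 5.0                   # decay time (s), amplitude (arb.)
ts = [i * 1e-4 for i in range(1, 30)]        # sample times after the laser pulse
intensities = [i0 * math.exp(-t / tau_true) for t in ts]

# Log-linear least-squares fit: log I = log I0 - t / tau.
n = len(ts)
logs = [math.log(v) for v in intensities]
tbar, lbar = sum(ts) / n, sum(logs) / n
slope = (sum((t - tbar) * (l - lbar) for t, l in zip(ts, logs))
         / sum((t - tbar) ** 2 for t in ts))
tau_fit = -1.0 / slope
```

With noiseless data the fit recovers the decay time exactly; characterizing how detector noise and background bias this fit is the kind of systematic-error study the abstract describes.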

  14. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb which states that: to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982, p. 1). Thus the importance and the aim of this paper is analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  15. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: By numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced the sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all rare haplotypes. The overall error rate was generally increasing with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information, if a specific risk haplotype can be expected to be reconstructed with rather no or high misclassification and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
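
    Per-haplotype sensitivity and specificity can be computed from true and reconstructed assignments exactly as in any binary misclassification problem. The helper below is our own minimal illustration (it assumes the two chromosomes of each subject are listed in a comparable order, which real reconstructions may not guarantee), not the analysis code used in the KORA study.

```python
def haplotype_sens_spec(true_pairs, reconstructed_pairs, haplotype):
    """Per-haplotype sensitivity and specificity, treating each
    chromosome as one binary classification: 'carries this haplotype'
    vs. 'does not'. Assumes chromosome order is comparable across the
    true and reconstructed pairs (an illustrative simplification)."""
    tp = fp = fn = tn = 0
    for truth, recon in zip(true_pairs, reconstructed_pairs):
        for t, r in zip(truth, recon):
            if t == haplotype and r == haplotype:
                tp += 1
            elif t != haplotype and r == haplotype:
                fp += 1
            elif t == haplotype and r != haplotype:
                fn += 1
            else:
                tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Three subjects, two-SNP haplotypes; one "AC" chromosome is misread as "AG".
true_pairs = [("AC", "AG"), ("AC", "AC"), ("AG", "GG")]
recon_pairs = [("AC", "AG"), ("AC", "AG"), ("AG", "GG")]
sens, spec = haplotype_sens_spec(true_pairs, recon_pairs, "AC")
```

Here one of three true "AC" chromosomes is missed (sensitivity 2/3) while no non-carrier is falsely assigned "AC" (specificity 1), showing how the two measures separate the two error directions.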

  16. Barriers to reporting medication errors and near misses among nurses: A systematic review.

    Science.gov (United States)

    Vrbnjak, Dominika; Denieffe, Suzanne; O'Gorman, Claire; Pajnkihar, Majda

    2016-11-01

    To explore barriers to nurses' reporting of medication errors and near misses in hospital settings. Systematic review. Medline, CINAHL, PubMed and Cochrane Library in addition to Google and Google Scholar and reference lists of relevant studies published in English between January 1981 and April 2015 were searched for relevant qualitative, quantitative or mixed methods empirical studies or unpublished PhD theses. Papers with a primary focus on barriers to reporting medication errors and near misses in nursing were included. The titles and abstracts of the search results were assessed for eligibility and relevance by one of the authors. After retrieval of the full texts, two of the authors independently made decisions concerning the final inclusion and these were validated by the third reviewer. Three authors independently assessed methodological quality of studies. Relevant data were extracted and findings were synthesised using thematic synthesis. From 4038 identified records, 38 studies were included in the synthesis. Findings suggest that organizational barriers such as culture, the reporting system and management behaviour in addition to personal and professional barriers such as fear, accountability and characteristics of nurses are barriers to reporting medication errors. To overcome reported barriers it is necessary to develop a non-blaming, non-punitive and non-fearful learning culture at unit and organizational level. Anonymous, effective, uncomplicated and efficient reporting systems and supportive management behaviour that provides open feedback to nurses is needed. Nurses are accountable for patients' safety, so they need to be educated and skilled in error management. Lack of research into barriers to reporting of near misses and low awareness of reporting suggests the need for further research and development of educational and management approaches to overcome these barriers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Implication of spot position error on plan quality and patient safety in pencil-beam-scanning proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G. [Division of Medical Physics, Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2014-08-15

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2² to 15² cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1 σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5 σ to 1.5 σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25 σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot
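
    The qualitative finding that reducing spot spacing decreases the dose error from a fixed position error can be reproduced with a one-dimensional toy model: a line of Gaussian spots in which the central spot is shifted. The geometry, numbers, and error metric below are drastic simplifications of the paper's 3D study, so the magnitudes should not be compared with the quoted tolerances.

```python
import math

def dose(x, centers, sigma):
    # Superposition of unit-weight Gaussian spots along one axis.
    return sum(math.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centers)

def max_percent_dose_error(sigma, spacing, shift, half_width=30.0):
    # Uniform line of spots; the central spot is moved by `shift` mm.
    n = int(half_width / spacing)
    centers = [k * spacing for k in range(-n, n + 1)]
    perturbed = list(centers)
    perturbed[n] += shift                        # central spot position error
    xs = [i * 0.1 for i in range(-100, 101)]     # sample the central 20 mm
    d_ref = [dose(x, centers, sigma) for x in xs]
    d_max = max(d_ref)
    return max(abs(dose(x, perturbed, sigma) - dr) / d_max * 100
               for x, dr in zip(xs, d_ref))

# 4 mm (1 sigma) spots, 1.2 mm position error, two spacings:
err_tight = max_percent_dose_error(sigma=4.0, spacing=4.0, shift=1.2)  # SS = 1.0 sigma
err_loose = max_percent_dose_error(sigma=4.0, spacing=6.0, shift=1.2)  # SS = 1.5 sigma
```

Tighter spacing raises the unperturbed dose level relative to the fixed perturbation from one displaced spot, so the percent error drops, mirroring the trend reported in the abstract.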

  18. Systematic Error of Acoustic Particle Image Velocimetry and Its Correction

    Directory of Open Access Journals (Sweden)

    Mickiewicz Witold

    2014-08-01

    Full Text Available Particle Image Velocimetry is getting more and more often the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. Particle Image Velocimetry with low sampling rate (about 15 Hz can be applied to visualize the acoustic field using the acquisition synchronized to the excitation signal. Such phase-locked PIV technique is described and used in experiments presented in the paper. The main goal of the research was to propose a model of PIV systematic error due to non-zero time interval between acquisitions of two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. Usefulness of the presented model is confirmed experimentally. The correction procedure, based on the proposed model, applied to measurement data increases the accuracy of acoustic particle velocity field visualization and creates new possibilities in observation of sound fields excited with multi-tonal or band-limited noise signals.
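
    One simple candidate model for this systematic error is the mean-over-interval effect: PIV reports the particle displacement over the inter-frame interval divided by that interval, so a sinusoidal velocity component of frequency f is attenuated by sinc(πfΔt). Whether this matches the model actually proposed in the paper is an assumption on our part; the sketch shows the corresponding amplitude correction.

```python
import math

def piv_amplitude_ratio(freq, dt):
    """Ratio of the PIV-estimated velocity amplitude to the true
    amplitude for a sinusoidal particle velocity, under the
    mean-over-interval model: displacement between the two frames is
    the integral of v(t) over [t, t + dt], giving a sinc attenuation."""
    x = math.pi * freq * dt
    return math.sin(x) / x if x else 1.0

# Illustrative numbers (not from the paper): a 1 kHz tone measured with
# a 200 microsecond inter-frame interval.
measured = 0.9                      # m/s, amplitude read off the vector maps
ratio = piv_amplitude_ratio(freq=1000.0, dt=200e-6)
corrected = measured / ratio        # amplitude after the sinc correction
```

For multi-tonal or band-limited excitation the same correction would be applied per frequency component, which is where a frequency-dependent model matters most.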

  19. Near field communications technology and the potential to reduce medication errors through multidisciplinary application

    LENUS (Irish Health Repository)

    O’Connell, Emer

    2016-07-01

    Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communications (NFC) is an emerging technology that may be used to develop novel medication management systems.

  20. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes, so extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background; a method is needed that extracts medical error factors while reducing the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and then related to 12 error factors. The relational model between the error-related items and the error factors was established using a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of the error-related items, their levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.

  1. Human factors interventions to reduce human errors and improve productivity in maintenance tasks

    International Nuclear Information System (INIS)

    Isoda, Hachiro; Yasutake, J.Y.

    1992-01-01

    This paper describes work in progress to develop interventions to reduce human errors and increase maintenance productivity in nuclear power plants. The effort is part of a two-phased Human Factors research program being conducted jointly by the Central Research Institute of Electric Power Industry (CRIEPI) in Japan and the Electric Power Research Institute (EPRI) in the United States. The overall objective of this joint research program is to identify critical maintenance tasks and to develop, implement and evaluate interventions which have high potential for reducing human errors or increasing maintenance productivity. As a result of the Phase 1 effort, ten critical maintenance tasks were identified. For these tasks, over 25 candidate interventions were identified for potential development. After careful analysis, seven interventions were selected for development during Phase 2. This paper describes the methodology used to analyze and identify the most critical tasks, the process of identifying and developing selected interventions and some of the initial results. (author)

  2. Errors prevention in manufacturing process through integration of Poka Yoke and TRIZ

    Science.gov (United States)

    Helmi, Syed Ahmad; Nordin, Nur Nashwa; Hisjam, Muhammad

    2017-11-01

    Integration of Poka Yoke and TRIZ is a method of solving problems by combining two different approaches: Poka Yoke is a trial-and-error method, while TRIZ uses a systematic approach. The main purpose of the technique is to eliminate product defects by preventing or correcting errors as soon as possible. Blaming workers for their mistakes is not the best way; rather, the work process should be reviewed so that no worker's behaviour or movement can cause errors. This study demonstrates the importance of using both methods, since everyone in industry needs to improve quality and increase productivity while reducing production cost.

  3. Identification and Assessment of Human Errors in Postgraduate Endodontic Students of Kerman University of Medical Sciences by Using the SHERPA Method

    Directory of Open Access Journals (Sweden)

    Saman Dastaran

    2016-03-01

    Introduction: Human errors are the cause of many accidents, both industrial and medical, so finding an approach for identifying and reducing them is very important. Since no study had been done on human errors in the dental field, this study aimed to identify and assess human errors in postgraduate endodontic students of Kerman University of Medical Sciences using the SHERPA method. Methods: This cross-sectional study was performed during 2014. Data were collected through task observation and interviews with postgraduate endodontic students. Overall, 10 critical tasks, those most likely to cause harm to patients, were determined. Next, Hierarchical Task Analysis (HTA) was conducted and human errors in each task were identified with the Systematic Human Error Reduction and Prediction Approach (SHERPA) technique worksheets. Results: After analyzing the SHERPA worksheets, 90 human errors were identified: action errors (67.7%), checking errors (13.3%), selection errors (8.8%), retrieval errors (5.5%) and communication errors (4.4%). Action errors were thus the most common and communication errors the least common. Conclusions: The results of the study showed that the highest percentage of errors and the highest level of risk were associated with action errors; therefore, to reduce the occurrence of such errors and limit their consequences, control measures including periodic training on work procedures, provision of work checklists, development of guidelines and establishment of a systematic and standardized reporting system should be put in place. Given these results, the control of recovery errors, with the highest percentage of undesirable risk, and action errors, with the highest frequency, should be the priority of control

  4. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors are a major contributor to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. To avoid human errors it is necessary to adapt systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but they are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic effort. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  5. Reducing image interpretation errors – Do communication strategies undermine this?

    International Nuclear Information System (INIS)

    Snaith, B.; Hardy, M.; Lewis, E.F.

    2014-01-01

    Introduction: Errors in the interpretation of diagnostic images in the emergency department are a persistent problem internationally. To address this issue, a number of risk reduction strategies have been suggested but only radiographer abnormality detection schemes (RADS) have been widely implemented in the UK. This study considers the variation in RADS operation and communication in light of technological advances and changes in service operation. Methods: A postal survey of all NHS hospitals operating either an Emergency Department or Minor Injury Unit and a diagnostic imaging (radiology) department (n = 510) was undertaken between July and August 2011. The questionnaire was designed to elicit information on emergency service provision and details of RADS. Results: 325 questionnaires were returned (n = 325/510; 63.7%). The majority of sites (n = 288/325; 88.6%) operated a RADS with the majority (n = 227/288; 78.8%) employing a visual ‘flagging’ system as the only method of communication although symbols used were inconsistent and contradictory across sites. 61 sites communicated radiographer findings through a written proforma (paper or electronic) but this was run in conjunction with a flagging system at 50 sites. The majority of sites did not have guidance on the scope or operation of the ‘flagging’ or written communication system in use. Conclusions: RADS is an established clinical intervention to reduce errors in diagnostic image interpretation within the emergency setting. The lack of standardisation in communication processes and practices alongside the rapid adoption of technology has increased the potential for error and miscommunication

  6. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, T; Panini, G C [ENEA - Dipartimento Tecnologie Intersettoriali di Base, Centro Ricerche Energia, Casaccia (Italy)]; Amoroso, A [Ricercatore Ospite (Italy)]

    1989-11-15

    Information about systematic errors is not given in EXFOR, the database of nuclear experimental measurements: their assessment is left to the ability of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, gives valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in experiments. The features of the project which led to the implementation of the system are presented. (author)

  7. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  8. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  9. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave ''significantly'' better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs

  10. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody™ and Robotics Tilt Module™ are products of BrainLAB A.G., Heimstetten, Germany)

  11. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approach and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error, classification as systematic and random errors. Statistical fundamentals: probability theories, population distributions, Bernoulli, Poisson, Gauss, t-test distribution, χ² test, error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test
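
    As an illustration of the error-propagation material listed above, a minimal sketch using standard Poisson counting statistics (a generic textbook example, not taken from the cited course material): the 1σ error of N counts is √N, and independent errors add in quadrature when a background rate is subtracted.

```python
import math

def poisson_rate_error(counts, t):
    """Count rate and its 1-sigma error for `counts` events in time t
    (Poisson statistics: sigma_N = sqrt(N))."""
    return counts / t, math.sqrt(counts) / t

def net_rate_error(gross, t_gross, bkg, t_bkg):
    """Net rate (gross minus background) with errors added in quadrature."""
    rg, sg = poisson_rate_error(gross, t_gross)
    rb, sb = poisson_rate_error(bkg, t_bkg)
    return rg - rb, math.hypot(sg, sb)
```

    For example, 10 000 gross counts and 400 background counts, each over 100 s, give a net rate of 96 counts/s with a 1σ error of √(1² + 0.2²) ≈ 1.02 counts/s.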

  12. Systematic review of active workplace interventions to reduce sickness absence

    OpenAIRE

    Odeen, Magnus; Magnussen, Liv Heide; Mæland, Silje; Larun, Lillebeth; Eriksen, Hege Randi; Tveito, Torill Helene

    2013-01-01

    Background: The workplace is used as a setting for interventions to prevent and reduce sickness absence, regardless of the specific medical conditions and diagnoses. Aims: To give an overview of the general effectiveness of active workplace interventions aimed at preventing and reducing sickness absence. Methods: We systematically searched PubMed, Embase, Psych-info, and ISI web of knowledge on 27 December 2011. Inclusion criteria were (i) participants over 18 years old with ...

  13. Using Healthcare Failure Mode and Effect Analysis to reduce medication errors in the process of drug prescription, validation and dispensing in hospitalised patients.

    Science.gov (United States)

    Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa

    2013-01-01

    To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
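
    The error-rate and relative-risk-reduction arithmetic described above can be sketched as follows; the before/after rates in the comment are hypothetical, and only the 22.0% prescription-error RRR figure comes from the abstract.

```python
def error_rate(errors, opportunities):
    """Errors per 100 opportunities for error, as defined in the study."""
    return 100.0 * errors / opportunities

def relative_risk_reduction(rate_before, rate_after):
    """Relative risk reduction (RRR), in percent."""
    return 100.0 * (rate_before - rate_after) / rate_before

# e.g. a hypothetical fall from 10.0 to 7.8 errors per 100 opportunities
# would correspond to the reported prescription-error RRR of 22.0%
```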

  14. The probability and the management of human error

    International Nuclear Information System (INIS)

    Dufey, R.B.; Saull, J.W.

    2004-01-01

    Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident cause. Its consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need to provide a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with, and derived from, the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best-estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error-state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε) and stochastic occurrences, and has a finite minimum rate: λ = 5×10⁻⁵ + (1/ε − 5×10⁻⁵) exp(−3ε). The future failure rate is entirely determined by the experience: thus the past defines the future
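
    The error-rate equation quoted above can be evaluated directly, reading the transcription as λ = 5×10⁻⁵ + (1/ε − 5×10⁻⁵)·exp(−3ε), with ε the accumulated experience in the authors' units. The sketch below shows the stated behaviour: an inexperience-dominated rate near 1/ε that decays toward the finite minimum of 5×10⁻⁵.

```python
import math

def human_error_rate(eps):
    """Error rate lambda as a function of accumulated experience eps,
    per the equation quoted in the abstract:
    lambda = 5e-5 + (1/eps - 5e-5) * exp(-3*eps)."""
    return 5e-5 + (1.0 / eps - 5e-5) * math.exp(-3.0 * eps)
```

    The rate is monotonically decreasing in experience, so "the past defines the future": more accumulated experience always predicts a lower (but never zero) error rate.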

  15. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. Most existing studies, however, focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. A first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by residual atmospheric biases and multipath; their patterns may also be affected by the receiver.
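
    The time-domain Allan variance mentioned above can be sketched with a generic non-overlapping estimator (not the authors' implementation): for pure white noise it falls off as 1/m with cluster size m, so a flattening or rise at large m flags temporally correlated (unmodeled) errors such as residual atmospheric biases and multipath.

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of series y at cluster size m:
    half the mean squared difference of successive m-sample means."""
    n = len(y) // m
    means = y[: n * m].reshape(n, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)
```

    For unit-variance white noise, `allan_variance(y, 1)` is close to 1 and the value shrinks roughly 16-fold at m = 16; a correlated residual series would not show this decay.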

  16. Comparison of Three Methods of Reducing Test Anxiety: Systematic Desensitization, Implosive Therapy, and Study Counseling

    Science.gov (United States)

    Cornish, Richard D.; Dilley, Josiah S.

    1973-01-01

    Systematic desensitization, implosive therapy, and study counseling have all been effective in reducing test anxiety. In addition, systematic desensitization has been compared to study counseling for effectiveness. This study compares all three methods and suggests that systematic desensitization is more effective than the others, and that implosive…

  17. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  18. Systematic review of interventions for reducing occupational stress in health care workers

    NARCIS (Netherlands)

    Ruotsalainen, Jani; Serra, Consol; Marine, Albert; Verbeek, Jos

    2008-01-01

    This study evaluated the effectiveness of interventions in reducing stress at work among health care workers. A systematic search was conducted of the literature on reducing stress or burnout in health care workers. The quality of the studies found was then appraised and the results combined. A

  19. Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    Science.gov (United States)

    Imig, Astrid; Stephenson, Edward

    2009-10-01

    The Storage Ring EDM Collaboration used the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17 mm thick carbon placed close to the beam, so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance covering laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.

  20. Effectiveness of antibiotics given before admission in reducing mortality from meningococcal disease: systematic review.

    NARCIS (Netherlands)

    Hahné, Susan J M; Charlett, André; Purcell, Bernadette; Samuelsson, Susanne; Camaroni, Ivonne; Ehrhard, Ingrid; Heuberger, Sigrid; Santamaria, Maria; Stuart, James M

    2006-01-01

    OBJECTIVE: To review the evidence for effectiveness of treatment with antibiotics before admission in reducing case fatality from meningococcal disease. DESIGN: Systematic review. DATA SOURCES: Cochrane register of trials and systematic reviews, database of abstracts of reviews of effectiveness,

  1. An Integrated Signaling-Encryption Mechanism to Reduce Error Propagation in Wireless Communications: Performance Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Olama, Mohammed M [ORNL]; Matalgah, Mustafa M [ORNL]; Bobrek, Miljko [ORNL]

    2015-01-01

    Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality of service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and probability of bit-error in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error rate performance while not noticeably increasing the required bit-rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit-error and throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading-type channels and compared to the ones of the conventional advanced encryption standard (AES).
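
    The Hamming error-correction coding mentioned above can be illustrated with the classic Hamming(7,4) code, which corrects any single flipped bit per 7-bit codeword. This is a generic textbook sketch, not the coding scheme or parameters from the paper itself.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword (parity bits at
    1-based positions 1, 2 and 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3      # 1-based position of the flipped bit
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

    Each 4-bit nibble costs three parity bits; the syndrome (s1, s2, s3) directly encodes the 1-based position of a single corrupted bit, so applying such a code only to the small encrypted portion keeps the rate overhead modest.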

  2. Advancing the research agenda for diagnostic error reduction.

    Science.gov (United States)

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  3. Addressing equity in interventions to reduce air pollution in urban areas: a systematic review.

    Science.gov (United States)

    Benmarhnia, Tarik; Rey, Lynda; Cartier, Yuri; Clary, Christelle M; Deguen, Séverine; Brousselle, Astrid

    2014-12-01

    We did a systematic review to assess quantitative studies investigating the association between interventions aiming to reduce air pollution, health benefits and equity effects. Three databases were searched for studies investigating the association between evaluated interventions aiming to reduce air pollution and health-related benefits. We designed a two-stage selection process to judge how equity was assessed and we systematically determined if there was a heterogeneous effect of the intervention between subgroups or subareas. Of 145 identified articles, 54 were reviewed in-depth, with eight satisfying the inclusion criteria. This systematic review showed that interventions aiming to reduce air pollution in urban areas have a positive impact on air quality and on mortality rates, but the documented effect on equity is less straightforward. Integration of equity in evidence-based public health is a great challenge nowadays. In this review we draw attention to the importance of considering equity in air pollution interventions. We also propose further methodological and theoretical challenges when assessing equity in interventions to reduce air pollution, and we present opportunities to develop this research area.

  4. Supervised exercise reduces cancer-related fatigue: a systematic review

    Directory of Open Access Journals (Sweden)

    José F Meneses-Echávez

    2015-01-01

    Question: Does supervised physical activity reduce cancer-related fatigue? Design: Systematic review with meta-analysis of randomised trials. Participants: People diagnosed with any type of cancer, without restriction to a particular stage of diagnosis or treatment. Intervention: Supervised physical activity interventions (eg, aerobic, resistance and stretching exercise), defined as any planned or structured body movement causing an increase in energy expenditure, designed to maintain or enhance health-related outcomes, and performed with systematic frequency, intensity and duration. Outcome measures: The primary outcome measure was fatigue. Secondary outcomes were physical and functional wellbeing assessed using the Functional Assessment of Cancer Therapy Fatigue Scale, European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire, Piper Fatigue Scale, Schwartz Cancer Fatigue Scale and the Multidimensional Fatigue Inventory. Methodological quality, including risk of bias of the studies, was evaluated using the PEDro Scale. Results: Eleven studies involving 1530 participants were included in the review. The assessment of quality showed a mean score of 6.5 (SD 1.1), indicating a low overall risk of bias. The pooled effect on fatigue, calculated as a standardised mean difference (SMD) using a random-effects model, was –1.69 (95% CI –2.99 to –0.39). Beneficial reductions in fatigue were also found with combined aerobic and resistance training with supervision (SMD = –0.41, 95% CI –0.70 to –0.13) and with combined aerobic, resistance and stretching training with supervision (SMD = –0.67, 95% CI –1.17 to –0.17). Conclusion: Supervised physical activity interventions reduce cancer-related fatigue. These findings suggest that combined aerobic and resistance exercise regimens with or without stretching should be included as part of rehabilitation programs for people who have been diagnosed with cancer

  5. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these

  6. A New Human-Machine Interfaces of Computer-based Procedure System to Reduce the Team Errors in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Sim, Joo Hyun; Lee, Hyun Chul

    2016-01-01

    In this study, we identify emerging types of team errors, especially in the digitalized control rooms of nuclear power plants such as the APR-1400 main control room in Korea. Most work in the nuclear industry is performed by teams of two or more persons. Even though individual errors can be detected and recovered by qualified colleagues and/or a well-trained team, errors committed by the team as a whole are seldom easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other, and that a team is a dependent aggregate that accomplishes a valuable goal. Organizational errors sometimes increase the likelihood of operator errors through the active-failure pathway and, at the same time, enhance the possibility of adverse outcomes through defensive weaknesses. We incorporate crew resource management as a representative approach to dealing with the team factors of human errors, and we suggest a set of crew resource management training procedures for unsafe environments where human errors can have devastating effects. We are developing alternative interfaces against team errors for use with a computer-based procedure system, a representative feature of the digitalized main control room. In this study, we propose new interfaces for the computer-based procedure system to reduce feasible team errors; an effectiveness test is under way to validate whether the new interfaces reduce team errors during operation with a computer-based procedure system in a digitalized control room

  7. A New Human-Machine Interfaces of Computer-based Procedure System to Reduce the Team Errors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Sim, Joo Hyun; Lee, Hyun Chul [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this study, we identify emerging types of team errors, especially in the digitalized control rooms of nuclear power plants such as the APR-1400 main control room in Korea. Most work in the nuclear industry is performed by teams of two or more persons. Even though individual errors can be detected and recovered by qualified colleagues and/or a well-trained team, errors committed by the team as a whole are seldom easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other, and that a team is a dependent aggregate that accomplishes a valuable goal. Organizational errors sometimes increase the likelihood of operator errors through the active-failure pathway and, at the same time, enhance the possibility of adverse outcomes through defensive weaknesses. We incorporate crew resource management as a representative approach to dealing with the team factors of human errors, and we suggest a set of crew resource management training procedures for unsafe environments where human errors can have devastating effects. We are developing alternative interfaces against team errors for use with a computer-based procedure system, a representative feature of the digitalized main control room. In this study, we propose new interfaces for the computer-based procedure system to reduce feasible team errors; an effectiveness test is under way to validate whether the new interfaces reduce team errors during operation with a computer-based procedure system in a digitalized control room.

  8. A new algorithm for reducing the workload of experts in performing systematic reviews.

    Science.gov (United States)

    Matwin, Stan; Kouznetsov, Alexandre; Inkpen, Diana; Frunza, Oana; O'Blenis, Peter

    2010-01-01

    To determine whether a factorized version of the complement naïve Bayes (FCNB) classifier can reduce the time spent by experts reviewing journal articles for inclusion in systematic reviews of drug class efficacy for disease treatment. The proposed classifier was evaluated on a test collection built from 15 systematic drug class reviews used in previous work. The FCNB classifier was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Weight engineering (WE) techniques were added to reduce underestimation for Medical Subject Headings (MeSH)-based and Publication Type (PubType)-based features. Cross-validation experiments were performed to evaluate the classifier's parameters and performance. Work saved over sampling (WSS) at no less than a 95% recall was used as the main measure of performance. The minimum workload reduction for a systematic review for one topic, achieved with a FCNB/WE classifier, was 8.5%; the maximum was 62.2% and the average over the 15 topics was 33.5%. This is 15.0% higher than the average workload reduction obtained using a voting perceptron-based automated citation classification system. The FCNB/WE classifier is simple, easy to implement, and produces significantly better results in reducing the workload than previously achieved. The results support it being a useful algorithm for machine-learning-based automation of systematic reviews of drug class efficacy for disease treatment.
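The WSS@95% metric used above has a simple closed form: the fraction of the collection reviewers may skip (the classifier's predicted negatives), minus the (1 − recall) saving that random sampling would achieve at the same recall. A minimal sketch with hypothetical screening counts:

```python
def work_saved_over_sampling(tn, fn, n_total, recall):
    """WSS: fraction of retrieved articles the reviewers may skip,
    minus the saving random sampling would achieve at the same recall."""
    return (tn + fn) / n_total - (1.0 - recall)

# Hypothetical screening run: 1000 retrieved articles; the classifier
# excludes 400 true negatives at the cost of 5 missed positives (recall 0.95).
wss = work_saved_over_sampling(tn=400, fn=5, n_total=1000, recall=0.95)
```

A WSS of 0.335, the review-average reported above, would mean reviewers were spared reading a third of the retrieved articles while still finding at least 95% of the relevant ones.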

  9. Density-functional errors in ionization potential with increasing system size

    Energy Technology Data Exchange (ETDEWEB)

    Whittleton, Sarah R.; Sosa Vazquez, Xochitl A.; Isborn, Christine M., E-mail: cisborn@ucmerced.edu [Chemistry and Chemical Biology, School of Natural Sciences, University of California, Merced, 5200 North Lake Road, Merced, California 95343 (United States); Johnson, Erin R., E-mail: erin.johnson@dal.ca [Chemistry and Chemical Biology, School of Natural Sciences, University of California, Merced, 5200 North Lake Road, Merced, California 95343 (United States); Department of Chemistry, Dalhousie University, 6274 Coburg Road, Halifax, Nova Scotia B3H 4R2 (Canada)

    2015-05-14

    This work investigates the effects of molecular size on the accuracy of density-functional ionization potentials for a set of 28 hydrocarbons, including series of alkanes, alkenes, and oligoacenes. As the system size increases, delocalization error introduces a systematic underestimation of the ionization potential, which is rationalized by considering the fractional-charge behavior of the electronic energies. The computation of the ionization potential with many density-functional approximations is not size-extensive due to excessive delocalization of the incipient positive charge. While inclusion of exact exchange reduces the observed errors, system-specific tuning of long-range corrected functionals does not generally improve accuracy. These results emphasize that good performance of a functional for small molecules is not necessarily transferable to larger systems.
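The size-dependence described here can be stated compactly with the ΔSCF definition of the ionization potential and the fractional-charge condition it rests on (standard relations; the paper's exact computational protocol may differ):

```latex
% Delta-SCF ionization potential from total energies of the
% N- and (N-1)-electron systems:
\mathrm{IP} = E(N-1) - E(N)

% The exact functional is piecewise linear in fractional electron number,
%   E(N-\delta) = E(N) + \delta\,\mathrm{IP}, \qquad 0 \le \delta \le 1,
% whereas approximate functionals give a convex E(N-\delta) (delocalization
% error), so the computed IP falls below the exact value -- increasingly so
% as the incipient hole can spread over a larger molecule.
```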

  10. Positioning errors assessed with kV cone-beam CT for image-guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Li Jiongyan; Guo Xiaomao; Yao Weiqiang; Wang Yanyang; Ma Jinli; Chen Jiayi; Zhang Zhen; Feng Yan

    2010-01-01

    Objective: To assess set-up errors measured with kilovoltage cone-beam CT (KV-CBCT), and the impact of online corrections on the margins required to account for set-up variability during IMRT for patients with prostate cancer. Methods: Seven patients with prostate cancer undergoing IMRT were enrolled onto the study. The KV-CBCT scans were acquired at least twice weekly. After initial set-up using the skin marks, a CBCT scan was acquired and registered with the planning CT to determine the set-up errors using auto grey-scale registration software. Corrections were made by moving the table if the set-up errors were considered clinically significant (i.e., >2 mm). A second CBCT scan was acquired immediately after the corrections to evaluate the residual error. PTV margins were derived to account for the measured set-up errors and residual errors determined for this group of patients. Results: In total, 197 KV-CBCT images were acquired. The systematic and random positioning errors and calculated PTV margins without correction, in mm, were: a) lateral 3.1, 2.1, 9.3; b) longitudinal 1.5, 1.8, 5.1; c) vertical 4.2, 3.7, 13.0. The systematic and random positioning errors and calculated PTV margins with correction, in mm, were: a) lateral 1.1, 0.9, 3.4; b) longitudinal 0.7, 1.1, 2.5; c) vertical 1.1, 1.3, 3.7. Conclusions: With the guidance of online KV-CBCT, set-up errors could be reduced significantly for patients with prostate cancer receiving IMRT. The margin required after online CBCT correction for the patients enrolled in the study would be approximately 3-4 mm. (authors)
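The reported margins are consistent with the widely used van Herk margin recipe M = 2.5Σ + 0.7σ, assuming the first value listed for each axis is the systematic component Σ and the second the random component σ (the ordering that reproduces the reported margins); the abstract does not name the recipe, so this is an inference:

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """CTV-to-PTV margin recipe M = 2.5*Sigma + 0.7*sigma, in mm."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# Abstract values (Sigma, sigma) in mm:
lateral = van_herk_margin(3.1, 2.1)             # reported margin: 9.3 mm
vertical = van_herk_margin(4.2, 3.7)            # reported margin: 13.0 mm
corrected_lateral = van_herk_margin(1.1, 0.9)   # reported margin: 3.4 mm
```

The computed values agree with the abstract's margins to within rounding (e.g. 2.5 x 4.2 + 0.7 x 3.7 = 13.1 ≈ 13.0 mm).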

  11. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were cl...

  12. 75 FR 18514 - Developing Guidance on Naming, Labeling, and Packaging Practices to Reduce Medication Errors...

    Science.gov (United States)

    2010-04-12

    ... packaging designs. Among these measures, FDA agreed that by the end of FY 2010, after public consultation... product names and designing product labels and packaging to reduce medication errors. Four panel... of product packaging design, and costs associated with designing product packaging. Panel 3 will...

  13. Interventions for reducing medication errors in children in hospital

    NARCIS (Netherlands)

    Maaskant, Jolanda M; Vermeulen, Hester; Apampa, Bugewa; Fernando, Bernard; Ghaleb, Maisoon A; Neubert, Antje; Thayyil, Sudhin; Soe, Aung

    2015-01-01

    BACKGROUND: Many hospitalised patients are affected by medication errors (MEs) that may cause discomfort, harm and even death. Children are at especially high risk of harm as the result of MEs because such errors are potentially more hazardous to them than to adults. Until now, interventions to

  14. Interventions for reducing medication errors in children in hospital

    NARCIS (Netherlands)

    Maaskant, Jolanda M.; Vermeulen, Hester; Apampa, Bugewa; Fernando, Bernard; Ghaleb, Maisoon A.; Neubert, Antje; Thayyil, Sudhin; Soe, Aung

    2015-01-01

    Background Many hospitalised patients are affected by medication errors (MEs) that may cause discomfort, harm and even death. Children are at especially high risk of harm as the result of MEs because such errors are potentially more hazardous to them than to adults. Until now, interventions to

  15. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*_disk), and disk maximality (F*,max_disk ≡ V*,max_disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  16. The influence of random and systematic errors on a general definition of minimum detectable amount (MDA) applicable to all radiobioassay measurements

    International Nuclear Information System (INIS)

    Brodsky, A.

    1985-01-01

    An approach to defining minimum detectable amount (MDA) of radioactivity in a sample will be discussed, with the aim of obtaining comments helpful in developing a formulation of MDA that will be broadly applicable to all kinds of radiobioassay measurements, and acceptable to the scientists who make these measurements. Also, the influence of random and systematic errors on the defined MDA are examined

  17. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  18. Reducing Recreational Sedentary Screen Time: A Community Guide Systematic Review.

    Science.gov (United States)

    Ramsey Buchanan, Leigh; Rooks-Peck, Cherie R; Finnie, Ramona K C; Wethington, Holly R; Jacob, Verughese; Fulton, Janet E; Johnson, Donna B; Kahwati, Leila C; Pratt, Charlotte A; Ramirez, Gilbert; Mercer, Shawna L; Glanz, Karen

    2016-03-01

    Sedentary time spent with screen media is associated with obesity among children and adults. Obesity has potentially serious health consequences, such as heart disease and diabetes. This Community Guide systematic review examined the effectiveness and economic efficiency of behavioral interventions aimed at reducing recreational (i.e., neither school- nor work-related) sedentary screen time, as measured by screen time, physical activity, diet, and weight-related outcomes. For this review, an earlier ("original") review (search period, 1966 through July 2007) was combined with updated evidence (search period, April 2007 through June 2013) to assess effectiveness of behavioral interventions aimed at reducing recreational sedentary screen time. Existing Community Guide systematic review methods were used. Analyses were conducted in 2013-2014. The review included 49 studies. Two types of behavioral interventions were evaluated that either (1) focus on reducing recreational sedentary screen time only (12 studies); or (2) focus equally on reducing recreational sedentary screen time and improving physical activity or diet (37 studies). Most studies targeted children aged ≤13 years. Children's composite screen time (TV viewing plus other forms of recreational sedentary screen time) decreased 26.4 (interquartile interval= -74.4, -12.0) minutes/day and obesity prevalence decreased 2.3 (interquartile interval= -4.5, -1.2) percentage points versus a comparison group. Improvements in physical activity and diet were reported. Three study arms among adults found composite screen time decreased by 130.2 minutes/day. Among children, these interventions demonstrated reduced screen time, increased physical activity, and improved diet- and weight-related outcomes. More research is needed among adolescents and adults. Published by Elsevier Inc.

  19. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
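The ringdown decomposition referred to above can be written schematically as a sum of damped sinusoids plus a power-law tail (a standard form; amplitude and phase conventions vary between analyses):

```latex
h(t) \simeq \sum_{\ell m n} A_{\ell m n}\,
  e^{-t/\tau_{\ell m n}}\cos\!\left(2\pi f_{\ell m n}\,t + \phi_{\ell m n}\right)
  \;+\; h_{\mathrm{tail}}(t),
\qquad h_{\mathrm{tail}}(t) \sim t^{-p}
```

Here n = 0 labels the fundamental mode and n ≥ 1 the overtones; the abstract's finding is that the n = 1 (and preferably n = 2, 3) overtones must be retained to recover the fundamental frequency, damping time, mass and spin at the 1% level.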

  20. Strategies for reducing basis set superposition error (BSSE) in O/Au and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-01-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔE_PAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. The binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane-wave energies.
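For context, the reference against which BSSE-reduction strategies such as the ΔE_PAO variation are usually judged is the Boys–Bernardi counterpoise correction (the standard textbook form, not the scheme used in this paper):

```latex
% Counterpoise-corrected interaction energy of a dimer AB: every term is
% evaluated in the full dimer basis (superscript), so the artificial
% stabilisation from "borrowing" the partner's basis functions cancels.
E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}^{AB} \;-\; E_{A}^{AB} \;-\; E_{B}^{AB}
```

Counterpoise is straightforward for molecular dimers but awkward for extended surfaces, which is one motivation for basis-parameter strategies of the kind examined here.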

  1. Strategies for reducing basis set superposition error (BSSE) in O/Au and O/Ni

    KAUST Repository

    Shuttleworth, I.G.

    2015-11-01

    © 2015 Elsevier Ltd. All rights reserved. The effect of basis set superposition error (BSSE) and effective strategies for its minimisation have been investigated using the SIESTA-LCAO DFT package. Variation of the energy shift parameter ΔE_PAO has been shown to reduce BSSE for bulk Au and Ni and across their oxygenated surfaces. Alternative strategies based on either the expansion or contraction of the basis set have been shown to be ineffective in reducing BSSE. The binding energies for the surface systems obtained using LCAO were compared with BSSE-free plane-wave energies.

  2. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y direction in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)
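The group systematic and random components reported in studies like this one are conventionally computed as the spread of per-patient mean displacements and the RMS of per-patient spreads, respectively (the van Herk convention; the paper's exact estimator is not stated). A minimal sketch:

```python
import math
from statistics import mean, pstdev

def setup_error_components(displacements_by_patient):
    """Group systematic error Sigma = SD of the per-patient mean
    displacements; group random error sigma = RMS of the per-patient
    SDs (van Herk convention)."""
    means = [mean(d) for d in displacements_by_patient]
    sds = [pstdev(d) for d in displacements_by_patient]
    sigma_sys = pstdev(means)
    sigma_rand = math.sqrt(mean(s ** 2 for s in sds))
    return sigma_sys, sigma_rand

# Hypothetical daily displacements (mm) along one axis for two patients
sigma_sys, sigma_rand = setup_error_components([[1.0, 1.0, 1.0],
                                                [3.0, 3.0, 3.0]])
```

In this toy example each patient is perfectly reproducible day to day (sigma_rand = 0) but the two patients are offset from each other, so all the error is systematic.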

  3. Consequences of leaf calibration errors on IMRT delivery

    International Nuclear Information System (INIS)

    Sastre-Padro, M; Welleweerd, J; Malinen, E; Eilertsen, K; Olsen, D R; Heide, U A van der

    2007-01-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT
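The gamma analysis with 2%/2 mm criteria used above to compare films combines a dose-difference and a distance-to-agreement (DTA) criterion; a point passes if γ ≤ 1. A minimal 1D sketch using a global dose criterion (the study's exact normalization and search strategy are not stated):

```python
import math

def gamma_index(ref_dose, eval_dose, spacing_mm=1.0, dd_frac=0.02, dta_mm=2.0):
    """1D global gamma analysis: for each evaluated point, the minimum
    combined dose-difference / distance-to-agreement metric over all
    reference points. A point passes if gamma <= 1."""
    d_crit = dd_frac * max(ref_dose)   # global criterion: 2% of max dose
    gammas = []
    for i, de in enumerate(eval_dose):
        g2 = min(((i - j) * spacing_mm / dta_mm) ** 2
                 + ((de - dr) / d_crit) ** 2
                 for j, dr in enumerate(ref_dose))
        gammas.append(math.sqrt(g2))
    return gammas

profile = [10.0, 50.0, 100.0, 50.0, 10.0]
gammas = gamma_index(profile, profile)   # identical profiles: gamma = 0 everywhere
```

A profile shifted by 1 mm still passes everywhere under 2%/2 mm, which is why millimetre-scale leaf offsets can produce measurable dose differences yet only occasionally fail the gamma criterion, as the study found.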

  4. APPLICATION OF SIX SIGMA METHODOLOGY TO REDUCE MEDICATION ERRORS IN THE OUTPATIENT PHARMACY UNIT: A CASE STUDY FROM THE KING FAHD UNIVERSITY HOSPITAL, SAUDI ARABIA

    Directory of Open Access Journals (Sweden)

    Ahmed Al Kuwaiti

    2016-06-01

    Full Text Available Medication errors affect patient safety and the quality of healthcare. The aim of this study is to analyze the effect of the Six Sigma (DMAIC) methodology in reducing medication errors in the outpatient pharmacy of King Fahd Hospital of the University, Saudi Arabia. It was conducted through the five phases of the Define, Measure, Analyze, Improve, Control (DMAIC) model using various quality tools. The goal was to reduce medication errors in the outpatient pharmacy by 20%. After implementation of the improvement strategies, there was a marked reduction of defects and an improvement in their sigma ratings. In particular, the parts per million (PPM) of prescription/data-entry errors was reduced from 56,000 to 5,000, and its sigma rating improved from 3.09 to 4.08. This study concluded that the Six Sigma (DMAIC) methodology is effective in reducing medication errors and ensuring patient safety.
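The quoted sigma ratings follow from the defect rates via the standard normal quantile plus the conventional 1.5σ long-term shift; the sketch below reproduces the abstract's 3.09 → 4.08 improvement:

```python
from statistics import NormalDist

def sigma_rating(dpmo):
    """Long-term process sigma level from defects per million
    opportunities (DPMO), using the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

before = sigma_rating(56_000)   # ~3.09, as reported for prescription errors
after = sigma_rating(5_000)     # ~4.08, after the DMAIC improvements
```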

  5. FMEA: a model for reducing medical errors.

    Science.gov (United States)

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
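The risk priority number (RPN) referred to in the conclusion is simply the product of three ordinal scores. A minimal sketch with hypothetical scores (the 1-10 scales shown are the common FMEA convention):

```python
def risk_priority_number(severity, occurrence, detection):
    """FMEA RPN: product of three 1-10 ordinal scores (for detection,
    10 = failure least likely to be caught before causing harm)."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are conventionally 1-10")
    return severity * occurrence * detection

# Hypothetical failure mode, before and after a corrective action:
rpn_before = risk_priority_number(8, 6, 7)   # severe, frequent, hard to detect
rpn_after = risk_priority_number(8, 3, 3)    # same severity, mitigated otherwise
```

Failure modes are ranked by RPN and corrective actions are targeted at the highest values, which is how the reductions for cross-matching and POCT cited above were obtained.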

  6. Hepatic glucose output in humans measured with labeled glucose to reduce negative errors

    International Nuclear Information System (INIS)

    Levy, J.C.; Brown, G.; Matthews, D.R.; Turner, R.C.

    1989-01-01

    Steele and others have suggested that minimizing changes in glucose specific activity when estimating hepatic glucose output (HGO) during glucose infusions could reduce non-steady-state errors. This approach was assessed in nondiabetic and type II diabetic subjects during constant low-dose [27 μmol·kg ideal body wt (IBW)⁻¹·min⁻¹] glucose infusion followed by a 12 mmol/l hyperglycemic clamp. Eight subjects had paired tests with and without labeled infusions. Labeled infusion was used to compare HGO in 11 nondiabetic and 15 diabetic subjects. Whereas unlabeled infusions produced negative values for endogenous glucose output, labeled infusions largely eliminated this error and reduced the dependence of the Steele model on the pool fraction in the paired tests. By use of labeled infusions, 11 nondiabetic subjects suppressed HGO from 10.2 ± 0.6 (SE) fasting to 0.8 ± 0.9 μmol·kg IBW⁻¹·min⁻¹ after 90 min of glucose infusion and to −1.9 ± 0.5 μmol·kg IBW⁻¹·min⁻¹ after 90 min of a 12 mmol/l glucose clamp, but 15 diabetic subjects suppressed only partially, from 13.0 ± 0.9 fasting to 5.7 ± 1.2 at the end of the glucose infusion and 5.6 ± 1.0 μmol·kg IBW⁻¹·min⁻¹ in the clamp (P = 0.02, 0.002, and less than 0.001, respectively)
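The approach rests on Steele's single-pool non-steady-state equation (standard form, with pool fraction p and distribution volume V); keeping the specific activity SA nearly constant suppresses the dSA/dt term that drives the negative errors:

```latex
R_a(t) \;=\; \frac{F(t) \;-\; pV\,C(t)\,\dfrac{d\,\mathrm{SA}(t)}{dt}}{\mathrm{SA}(t)},
\qquad
\mathrm{HGO}(t) \;=\; R_a(t) \;-\; \mathrm{GIR}(t)
```

Here F is the tracer infusion rate, C the plasma glucose concentration, and GIR the exogenous (unlabeled) glucose infusion rate. When SA changes rapidly, errors in p and in dSA/dt propagate directly into Ra, which is how spuriously negative HGO values arise with unlabeled infusions.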

  7. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    Science.gov (United States)

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by total number of specimens. Rates from the two periods were compared using a chi square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
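The comparison of the two error rates can be reproduced with a Pearson chi-square test on the 2×2 table of error vs no-error counts; without a continuity correction this matches the reported P=0.0225 (whether the authors applied a correction is not stated in the abstract):

```python
import math

def two_proportion_chi_square(errors_a, n_a, errors_b, n_b):
    """Pearson chi-square test (1 df, no continuity correction) on the
    2x2 table of error vs no-error counts; returns (chi2, p)."""
    total_err = errors_a + errors_b
    n = n_a + n_b
    observed = [errors_a, n_a - errors_a, errors_b, n_b - errors_b]
    expected = [total_err * n_a / n, (n - total_err) * n_a / n,
                total_err * n_b / n, (n - total_err) * n_b / n]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = math.erfc(math.sqrt(chi2 / 2))   # chi-square(1 df) survival function
    return chi2, p

# Specimen error counts from the abstract: 19/4,760 before, 8/5,065 after
chi2, p = two_proportion_chi_square(19, 4760, 8, 5065)
```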

  8. The error analysis of field size variation in pelvis region by using immobilization device

    International Nuclear Information System (INIS)

    Kim, Ki Hwan; Kang, No Hyun; Kim, Dong Wuk; Kim, Jun Sang; Jang, Ji Young; Kim, Jae Sung; Kim, Yong Eun; Cho, Moon June

    2000-01-01

    In radiotherapy, it may happen to radiate surrounding normal tissue because of inconsistent field size by changing patient position during treatment. We are going to analyze errors reduced by using immobilization device with Electronic Portal Imaging Device(EPID) in this study. We had treated the twenty-one patients in pelvic region with 10 MV X-ray from Aug. 1998 to Aug. 1999 at chungnam National University Hospital. All patients were treated at supine position during treatment. They were separated to two groups, 11 patients without device and 10 patients with immobilization device. We used styrofoam for immobilization device and measured the error of anterior direction for x, y axis and lateral direction for z, y axis from simulation film to EPID image using matching technique. For no immobilization device group, the mean deviation values of x axis and y axis are 0.19 mm. 0.48 mm, respectively and the standard deviations of systematic deviation are 2.38 mm, 2.19 mm, respectively and of random deviation for x axis and y axis are 1.92 mm. 1.29 mm, respectively. The mean deviation values of z axis and y axis are -3.61 mm. 2.07 mm, respectively and the standard deviations of systematic deviation are 3.20 mm, 2.29 mm, respectively and of random deviation for z axis and y axis are 2.73 mm. 1.62 mm, respectively. For immobilization device group, the mean deviation values of x axis and y axis are 0.71 mm. -1.07 mm, respectively and the standard deviations of systematic deviation are 1.80 mm, 2.26 mm, respectively and the standard deviations of systematic deviation are 1.80 mm, 2.26 mm, respectively of random deviation for x axis and y axis are 1.56 mm. 1.27 mm, respectively. The mean deviation values of z axis and y axis are -1.76 mm. 1.08 mm, respectively and the standard deviations of systematic deviation are 1.87 mm, 2.83 mm, respectively and of random deviation for x axis and y axis are 1.68 mm, 1.65 mm, respectively. 
Because of reducing random and systematic error
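The abstract does not spell out how the systematic and random components were separated. The standard radiotherapy convention, per measurement axis, takes the systematic error Σ as the standard deviation of the per-patient mean deviations and the random error σ as the root mean square of the per-patient standard deviations. A minimal sketch of that convention, with invented toy data:

```python
import numpy as np

def setup_error_components(deviations):
    """Decompose portal-image setup deviations into population mean (M),
    systematic error (Sigma = SD of per-patient means) and random error
    (sigma = RMS of per-patient SDs), per the standard convention.

    deviations: list of 1-D arrays, one array of measured shifts (mm)
    per patient."""
    patient_means = np.array([d.mean() for d in deviations])
    patient_sds = np.array([d.std(ddof=1) for d in deviations])
    overall_mean = patient_means.mean()                 # M
    systematic = patient_means.std(ddof=1)              # Sigma: inter-patient
    random_err = np.sqrt((patient_sds ** 2).mean())     # sigma: intra-patient
    return overall_mean, systematic, random_err

# toy data: 3 patients, 5 fractions each (mm)
rng = np.random.default_rng(0)
data = [rng.normal(mu, 1.5, 5) for mu in (0.5, -2.0, 1.0)]
M, Sigma, sigma = setup_error_components(data)
```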

  9. Technical Note: Interference errors in infrared remote sounding of the atmosphere

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2007-07-01

    Full Text Available Classical error analysis in remote sounding distinguishes between four classes: "smoothing errors," "model parameter errors," "forward model errors," and "retrieval noise errors". For infrared sounding "interference errors", which, in general, cannot be described by these four terms, can be significant. Interference errors originate from spectral residuals due to "interfering species" whose spectral features overlap with the signatures of the target species. A general method for quantification of interference errors is presented, which covers all possible algorithmic implementations, i.e., fine-grid retrievals of the interfering species or coarse-grid retrievals, and cases where the interfering species are not retrieved. In classical retrieval setups interference errors can exceed smoothing errors and can vary by orders of magnitude due to state dependency. An optimum strategy is suggested which practically eliminates interference errors by systematically minimizing the regularization strength applied to joint profile retrieval of the interfering species. This leads to an interfering-species selective deweighting of the retrieval. Details of microwindow selection are no longer critical for this optimum retrieval and widened microwindows even lead to reduced overall (smoothing and interference errors. Since computational power will increase, more and more operational algorithms will be able to utilize this optimum strategy in the future. The findings of this paper can be applied to soundings of all infrared-active atmospheric species, which include more than two dozen different gases relevant to climate and ozone. This holds for all kinds of infrared remote sounding systems, i.e., retrievals from ground-based, balloon-borne, airborne, or satellite spectroradiometers.
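The deweighting strategy can be illustrated with a toy linear retrieval: joint least squares over a target and an interfering species, where shrinking the regularization applied to the interferer removes the leakage into the target estimate. The line shapes and regularization weights below are synthetic stand-ins, not the paper's retrieval setup:

```python
import numpy as np

# Toy joint retrieval: a target and an interfering species with overlapping
# spectral signatures. Solve the regularized least-squares problem
#   min ||y - K x||^2 + x^T R x
# and compare a strongly regularized interferer with a deweighted one.
grid = np.linspace(-1, 1, 80)
k_target = np.exp(-(grid / 0.15) ** 2)           # target signature
k_interf = np.exp(-((grid - 0.1) / 0.25) ** 2)   # overlapping interferer
K = np.column_stack([k_target, k_interf])

x_true = np.array([1.0, 0.7])
y = K @ x_true                                   # noise-free for clarity

def retrieve(lam_target, lam_interf):
    R = np.diag([lam_target, lam_interf])
    return np.linalg.solve(K.T @ K + R, K.T @ y)

strong = retrieve(1e-3, 10.0)   # heavy regularization on the interferer
weak = retrieve(1e-3, 1e-6)     # interferer deweighted

err_strong = abs(strong[0] - x_true[0])  # interference leaks into the target
err_weak = abs(weak[0] - x_true[0])      # practically eliminated
```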

  10. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  11. A new calibration model for pointing a radio telescope that considers nonlinear errors in the azimuth axis

    International Nuclear Information System (INIS)

    Kong De-Qing; Wang Song-Gen; Zhang Hong-Bo; Wang Jin-Qing; Wang Min

    2014-01-01

    A new calibration model of a radio telescope that includes pointing error is presented, which considers nonlinear errors in the azimuth axis. For a large radio telescope, in particular for a telescope with a turntable, it is difficult to correct pointing errors using a traditional linear calibration model, because errors produced by the wheel-on-rail or center bearing structures are generally nonlinear. Fourier expansion is made for the oblique error and parameters describing the inclination direction along the azimuth axis based on the linear calibration model, and a new calibration model for pointing is derived. The new pointing model is applied to the 40m radio telescope administered by Yunnan Observatories, which is a telescope that uses a turntable. The results show that this model can significantly reduce the residual systematic errors due to nonlinearity in the azimuth axis compared with the linear model
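The idea of augmenting a linear pointing model with Fourier terms in azimuth can be sketched as a least-squares fit; the harmonic orders, coefficients and noise level below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

# Fit azimuth pointing offsets as a linear model (constant + first harmonic,
# i.e. the classical tilt terms) versus an extended model with higher
# Fourier harmonics standing in for nonlinear wheel-on-rail errors.
rng = np.random.default_rng(2)
az = np.linspace(0, 2 * np.pi, 200)

# synthetic offsets: constant + tilt terms + a 3rd-harmonic ripple
truth = 0.5 + 0.3 * np.sin(az) + 0.2 * np.cos(az) + 0.15 * np.sin(3 * az)
obs = truth + rng.normal(0, 0.01, az.size)

def design(az, n_harmonics):
    cols = [np.ones_like(az)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(k * az), np.cos(k * az)]
    return np.column_stack(cols)

A_lin = design(az, 1)   # linear (traditional) calibration model
res_lin = obs - A_lin @ np.linalg.lstsq(A_lin, obs, rcond=None)[0]

A_ext = design(az, 4)   # extended model absorbs the nonlinear ripple
res_ext = obs - A_ext @ np.linalg.lstsq(A_ext, obs, rcond=None)[0]

rms_lin = np.sqrt(np.mean(res_lin ** 2))
rms_ext = np.sqrt(np.mean(res_ext ** 2))
```

With the ripple present, the extended model's residual RMS drops to roughly the noise level, mirroring the reduction in residual systematic errors reported above.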

  12. A Systems Modeling Approach for Risk Management of Command File Errors

    Science.gov (United States)

    Meshkat, Leila

    2012-01-01

    The main cause of commanding errors is often (but not always) procedural: lack of maturity in the processes, incompleteness of requirements, or lack of compliance with these procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and making hasty changes to standard procedures in response to an unexpected event. In general, it is important to look at the big picture prior to taking corrective actions. In the case of errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using data from the nuclear industry regarding human reliability. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.

  13. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  14. The DiskMass Survey. II. Error Budget

    Science.gov (United States)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ_*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ_*^disk), and disk maximality (F_*,max^disk ≡ V_*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  15. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    Directory of Open Access Journals (Sweden)

    Guo Xiao-Mao

    2010-10-01

    Full Text Available Background: The cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and correction action levels needed in cone beam computed tomography (CBCT) guided accelerated partial breast irradiation (APBI). Methods: Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained, one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. Results: A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatment delivery. 
Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; 7.6 mm, 7.9 mm, 9.4 mm, 10

  16. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    International Nuclear Information System (INIS)

    Cai, Gang; Hu, Wei-Gang; Chen, Jia-Yi; Yu, Xiao-Li; Pan, Zi-Qiang; Yang, Zhao-Zhi; Guo, Xiao-Mao; Shao, Zhi-Min; Jiang, Guo-Liang

    2010-01-01

    The cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and correction action levels needed in cone beam computed tomography (CBCT) guided accelerated partial breast irradiation (APBI). Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained, one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatment delivery. Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; 7.6 mm, 7.9 mm, 9.4 mm, 10.1 mm and 12.7 mm in the AP direction
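The abstract does not state which margin recipe was used. The widely cited van Herk formula, M = 2.5Σ + 0.7σ, happens to reproduce the reported LR margin for the 3 mm action level, so it is sketched here purely as an illustration of how margins follow from the systematic and random components:

```python
# CTV-to-PTV margin from systematic (Sigma) and random (sigma) errors,
# using the widely cited van Herk recipe M = 2.5*Sigma + 0.7*sigma.
# Illustration only; the paper's actual recipe is not stated.
def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

# RAIF errors reported for the 3 mm action level, LR direction:
margin_lr = van_herk_margin(2.5, 2.3)  # 7.86 mm, close to the reported 7.9 mm
```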

  17. Force Reproduction Error Depends on Force Level, whereas the Position Reproduction Error Does Not

    NARCIS (Netherlands)

    Onneweer, B.; Mugge, W.; Schouten, Alfred Christiaan

    2016-01-01

    When reproducing a previously perceived force or position humans make systematic errors. This study determined the effect of force level on force and position reproduction, when both target and reproduction force are self-generated with the same hand. Subjects performed force reproduction tasks at

  18. A method for optical ground station reduce alignment error in satellite-ground quantum experiments

    Science.gov (United States)

    He, Dong; Wang, Qiang; Zhou, Jian-Wei; Song, Zhi-Jun; Zhong, Dai-Jun; Jiang, Yu; Liu, Wan-Sheng; Huang, Yong-Mei

    2018-03-01

    A satellite dedicated to quantum science experiments was developed and successfully launched from Jiuquan, China, on August 16, 2016. Two new optical ground stations (OGSs) were built to cooperate with the satellite in satellite-ground quantum experiments. Each OGS corrects its pointing direction for satellite trajectory error using its coarse tracking system and the uplink beacon sight; the alignment accuracy between the fine tracking CCD and the uplink beacon optical axis therefore determines whether the beacon can cover the quantum satellite at all times while it passes over the OGS. Unfortunately, when we tested the specifications of the OGSs, because the coarse tracking optical system was a commercial telescope, the position of the target in the coarse CCD changed by up to 600 μrad with the change of elevation angle. In this paper, a method for reducing the alignment error between the beacon beam and the fine tracking CCD is proposed. First, the OGS fits the curve of target positions in the coarse CCD as a function of elevation angle. Second, the OGS fits the curve of hexapod secondary mirror positions as a function of elevation angle. Third, when tracking the satellite, the fine tracking error is unloaded onto the real-time zero-point position of the coarse CCD computed from the first calibration curve, while the positions of the hexapod secondary mirror are adjusted using the second calibration curve. Finally, experimental results are presented: the alignment error is less than 50 μrad.
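The calibration curves described above can be sketched as simple fits of drift versus elevation angle; the quadratic drift shape, angle range and noise level below are invented for illustration:

```python
import numpy as np

# Fit the apparent drift of the beacon position on the coarse CCD as a
# function of elevation angle, then use the fitted curve to move the
# tracking zero point in real time. Numbers are illustrative assumptions.
elev = np.linspace(10, 80, 15)                       # elevation (degrees)
drift = 600e-6 * (elev / 80.0) ** 2                  # radians, up to ~600 urad
measured = drift + np.random.default_rng(3).normal(0, 5e-6, elev.size)

coeffs = np.polyfit(elev, measured, deg=2)           # calibration curve

def corrected_zero_point(elevation_deg):
    """Predicted drift at this elevation; subtract it from the raw target
    position before closing the fine-tracking loop."""
    return np.polyval(coeffs, elevation_deg)

residual = measured - np.polyval(coeffs, elev)
max_residual_urad = np.abs(residual).max() * 1e6     # leftover misalignment
```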

  19. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  20. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  1. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    Science.gov (United States)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  2. Determination of fission products and actinides by inductively coupled plasma-mass spectrometry using isotope dilution analysis. A study of random and systematic errors

    International Nuclear Information System (INIS)

    Ignacio Garcia Alonso, Jose

    1995-01-01

    The theory of the propagation of errors (random and systematic) for isotope dilution analysis (IDA) has been applied to the analysis of fission products and actinide elements by inductively coupled plasma-mass spectrometry (ICP-MS). Systematic errors in ID-ICP-MS arising from mass-discrimination (mass bias), detector non-linearity and isobaric interferences in the measured isotopes have to be corrected for in order to achieve accurate results. The mass bias factor and the detector dead-time can be determined by using natural elements with well-defined isotope abundances. A combined method for the simultaneous determination of both factors is proposed. On the other hand, isobaric interferences for some fission products and actinides cannot be eliminated using mathematical corrections (due to the unknown isotope abundances in the sample) and a chemical separation is necessary. The theory for random error propagation in IDA has been applied to the determination of non-natural elements by ICP-MS taking into account all possible sources of uncertainty with pulse counting detection. For the analysis of fission products, the selection of the right spike isotope composition and spike to sample ratio can be performed by applying conventional random propagation theory. However, it has been observed that, in the experimental determination of the isotope abundances of the fission product elements to be determined, the correction for mass-discrimination and the correction for detector dead-time losses contribute to the total random uncertainty. For the instrument used in the experimental part of this study, it was found that the random uncertainty on the measured isotope ratios followed Poisson statistics for low counting rates whereas, for high counting rates, source instability was the main source of error
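The mass-bias and dead-time corrections mentioned above have common textbook forms, sketched here (the thesis' exact conventions may differ):

```python
# Standard ICP-MS corrections in their common textbook forms:
#   dead-time (non-paralyzable): n_true = n_obs / (1 - n_obs * tau)
#   mass bias (linear):          R_true = R_obs * (1 + eps * dm)

def dead_time_correct(count_rate_obs, tau_s):
    """Non-paralyzable dead-time correction for pulse counting (cps)."""
    return count_rate_obs / (1.0 - count_rate_obs * tau_s)

def mass_bias_correct(ratio_obs, eps_per_amu, delta_mass):
    """Linear mass-bias correction for an isotope ratio; eps_per_amu is the
    bias per atomic mass unit, delta_mass the mass difference."""
    return ratio_obs * (1.0 + eps_per_amu * delta_mass)

# Example: 1e6 cps with a 35 ns dead time loses about 3.5% of counts
true_rate = dead_time_correct(1.0e6, 35e-9)
```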

  3. A method for reducing memory errors in the isotopic analyses of uranium hexafluoride by mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Bir, R [Commissariat a l'Energie Atomique, Saclay (France), Centre d'Etudes Nucleaires

    1961-07-01

    One of the most serious causes of systematic error in isotopic analyses of uranium from UF{sub 6} is the tendency of this material to become fixed in various ways in the mass spectrometer. As a result the value indicated by the instrument is influenced by the isotopic composition of the substances previously analysed. The resulting error is called a memory error. Making use of an elementary mathematical theory, the various methods used to reduce memory errors are analysed and compared. A new method is then suggested, which reduces the memory errors to an extent where they become negligible over a wide range of {sup 235}U concentration. The method is given in full, together with examples of its application. (author)
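A single-coefficient memory model illustrates why previously analysed material biases the measured ratio, and how a known memory fraction can be inverted out. This is an elementary stand-in, not the paper's full theory:

```python
# Elementary memory-effect model: the observed isotope ratio is a mix of the
# current sample and a fraction m of the previously analysed material,
#   R_obs[k] = (1 - m) * R_true[k] + m * R_obs[k-1].
# If m is known (e.g. from alternating reference samples), invert it:

def correct_memory(r_obs, m):
    """Recover true ratios from a sequence of observed ratios, assuming the
    instrument was equilibrated on the first sample."""
    r_true = [r_obs[0]]
    for k in range(1, len(r_obs)):
        r_true.append((r_obs[k] - m * r_obs[k - 1]) / (1.0 - m))
    return r_true
```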

  4. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Xu, H; Chetty, I; Wen, N

    2016-01-01

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30, 45, 60 degree) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6 degrees in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  5. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H [Wayne State University, Detroit, MI (United States); Chetty, I; Wen, N [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30, 45, 60 degree) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6 degrees in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  6. Distance error correction for time-of-flight cameras

    Science.gov (United States)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows the acquisition of a large number of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
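The learning-based correction can be sketched with scikit-learn's RandomForestRegressor; the features and the synthetic error shape below are invented stand-ins for the paper's tailored feature vector, and the improvement on training data is naturally optimistic compared with the reported 30-40% reductions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Learn a per-pixel distance correction with a random forest instead of an
# explicit error model. Features and error shape are synthetic stand-ins.
rng = np.random.default_rng(4)
n = 2000
distance = rng.uniform(0.5, 5.0, n)       # raw ToF distance (m)
amplitude = rng.uniform(0.1, 1.0, n)      # return-signal amplitude

# synthetic systematic error: distance "wiggling" plus amplitude dependence
error = 0.02 * np.sin(4 * distance) + 0.01 / amplitude
X = np.column_stack([distance, amplitude])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, error)

predicted = model.predict(X)              # per-pixel correction values
residual = error - predicted              # error left after correction
improvement = 1.0 - np.abs(residual).mean() / np.abs(error).mean()
```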

  7. Analysis of field errors in existing undulators

    International Nuclear Information System (INIS)

    Kincaid, B.M.

    1990-01-01

    The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs

  8. Effects of systematic sampling on satellite estimates of deforestation rates

    International Nuclear Information System (INIS)

    Steininger, M K; Godoy, F; Harper, G

    2009-01-01

    Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. 
However, systematic sampling at the national level yields large CIs unless the
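The standard-error and confidence-interval calculation described in the abstract can be sketched as follows; the sample fractions are hypothetical, and the systematic sample is treated as if it were a simple random sample (a common approximation):

```python
import math

# Hypothetical deforestation fractions observed in 10 sampled 10 km squares.
sample_fractions = [0.02, 0.00, 0.05, 0.01, 0.03, 0.00, 0.08, 0.02, 0.01, 0.04]

def ci_of_mean(values, z=1.96):
    """Mean, standard error, and 95% CI half-width, treating the
    systematic sample as if it were a simple random sample."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)   # sample variance
    se = math.sqrt(var / n)
    return mean, se, z * se

mean, se, half_width = ci_of_mean(sample_fractions)
# Relative CI, i.e. the CI expressed as a percent of the estimate,
# which is how the abstract reports its 21%, 126%, etc. figures.
relative_ci = 100 * half_width / mean
```

Comparing this sample-based CI against the "actual error" (sample estimate minus full-coverage-map estimate) is the paper's second check.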

  9. Probabilistic linkage to enhance deterministic algorithms and reduce data linkage errors in hospital administrative data.

    Science.gov (United States)

    Hagger-Johnson, Gareth; Harron, Katie; Goldstein, Harvey; Aldridge, Robert; Gilbert, Ruth

    2017-06-30

    BACKGROUND: The pseudonymisation algorithm used to link together episodes of care belonging to the same patients in England (HESID) has never undergone any formal evaluation to determine the extent of data linkage error. We aimed to quantify improvements in linkage accuracy from adding probabilistic linkage to the existing deterministic HESID algorithms. We studied inpatient admissions to NHS hospitals in England (Hospital Episode Statistics, HES) over 17 years (1998 to 2015) for a sample of patients (born on the 13th or 28th of a month in 1992, 1998, 2005 or 2012). We compared the existing deterministic algorithm with one that included an additional probabilistic step, in relation to a reference standard created using enhanced probabilistic matching with additional clinical and demographic information. Missed and false matches were quantified and their impact on estimates of hospital readmission within one year was determined. HESID produced a high missed-match rate, improving over time (8.6% in 1998 to 0.4% in 2015). Missed matches were more common for ethnic minorities, those living in areas of high socio-economic deprivation, foreign patients and those with 'no fixed abode'. Estimates of the readmission rate were biased for several patient groups owing to missed matches; this bias was reduced for nearly all groups by the probabilistic step. CONCLUSION: Probabilistic linkage of HES reduced missed matches and bias in estimated readmission rates, with clear implications for commissioning, service evaluation and performance monitoring of hospitals. The existing algorithm should be modified to address data linkage error, and a retrospective update of the existing data would address existing linkage errors and their implications.
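Probabilistic linkage steps of this kind are typically based on Fellegi-Sunter match weights. A minimal sketch follows; the field names and m-/u-probabilities are hypothetical illustrations, not the actual HESID configuration:

```python
import math

# Hypothetical m- and u-probabilities per identifier: the probability the
# field agrees among true matches (m) vs. among non-matches (u).
FIELDS = {
    "nhs_number": (0.99, 0.001),
    "birth_date": (0.97, 0.01),
    "postcode":   (0.90, 0.05),
    "sex":        (0.99, 0.50),
}

def match_weight(agreements):
    """Sum of log2 likelihood ratios (Fellegi-Sunter). `agreements` maps
    field name -> True/False for agreement between two candidate records."""
    w = 0.0
    for field, (m, u) in FIELDS.items():
        if agreements[field]:
            w += math.log2(m / u)            # agreement: evidence for a match
        else:
            w += math.log2((1 - m) / (1 - u))  # disagreement: evidence against
    return w

# Two records agreeing on everything except postcode still score highly:
w = match_weight({"nhs_number": True, "birth_date": True,
                  "postcode": False, "sex": True})
```

Record pairs whose total weight exceeds an upper threshold are accepted as matches; pairs below a lower threshold are rejected, with the zone in between sent for review.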

  10. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
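The reference-sample compensation idea, fitting a parametric polynomial to the artifact recorded on the stationary sample and then subtracting it from the test-sample field, can be sketched in one dimension (all values are hypothetical):

```python
import numpy as np

# Hypothetical 1-D slice: artificial displacement (in voxels) measured on the
# stationary reference sample at depths z, plus a test-sample field that
# mixes real deformation with the same scanner artifact.
z = np.linspace(0.0, 100.0, 21)
artifact = 0.05 + 0.002 * z          # drift from CT scanner self-heating
real_deformation = 0.001 * z         # the signal we actually want
measured_test = real_deformation + artifact

# Fit a parametric polynomial model to the reference-sample field ...
coeffs = np.polyfit(z, artifact, deg=2)
# ... and subtract the modelled artifact from the test-sample field.
corrected = measured_test - np.polyval(coeffs, z)
```

Because the reference sample is stationary, any displacement detected on it must be artificial, which is what justifies subtracting the fitted model from the test sample.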

  11. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.; Pan, B.; Lubineau, Gilles

    2017-01-01

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.

  12. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China).

    Science.gov (United States)

    Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu

    2017-05-25

    Gravity data gaps in mountainous areas are nowadays often filled in with the data from airborne gravity surveys. The errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated; the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mGal. A major obstacle in using airborne gravimetry is the error caused by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since no high-accuracy surface gravity data are available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The
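Why regularization suppresses the random-error amplification of downward continuation can be illustrated with a Tikhonov-regularized least-squares sketch. The diagonal operator and noise values below are toy stand-ins, not the authors' semi-parametric method:

```python
import numpy as np

def downward_continue(G, d, alpha):
    """Tikhonov-regularized least squares:
    minimize ||G x - d||^2 + alpha * ||x||^2."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)

# Toy ill-posed problem: singular values 1, 0.1, 0.01 mimic the damping of
# high spatial frequencies with flight altitude; the noise on the weakest
# component is amplified 100-fold by the naive inversion.
G = np.diag([1.0, 0.1, 0.01])
x_true = np.ones(3)
noise = np.array([0.001, 0.001, 0.05])
d = G @ x_true + noise                        # "airborne" observations

x_naive = np.linalg.solve(G, d)               # unregularized: noise blows up
x_reg = downward_continue(G, d, alpha=1e-3)   # regularized: noise damped

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The regularization trades a small bias on well-determined components for a large reduction in random-error amplification on poorly determined ones; the semi-parametric part of the authors' method additionally absorbs systematic errors, which plain Tikhonov does not.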

  13. Evaluation and Comparison of the Processing Methods of Airborne Gravimetry Concerning the Errors Effects on Downward Continuation Results: Case Studies in Louisiana (USA) and the Tibetan Plateau (China)

    Science.gov (United States)

    Zhao, Q.

    2017-12-01

    Gravity data gaps in mountainous areas are nowadays often filled in with the data from airborne gravity surveys. Because of the errors caused by the airborne gravimeter sensors, and because of rough flight conditions, such errors cannot be completely eliminated. The precision of the gravity disturbances generated by the airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry are the errors caused by the downward continuation. In order to improve the results the external high-accuracy gravity information e.g., from the surface data can be used for high frequency correction, while satellite information can be applying for low frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving the external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau; i.e., without the use of the external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, and for the Tibetan Plateau we show that the numerical experiment is also successfully conducted using the synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with the synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between the downward continuation altitudes and the error effect. The

  14. Reducing Technology-Induced Errors: Organizational and Health Systems Approaches.

    Science.gov (United States)

    Borycki, Elizabeth M; Senthriajah, Yalini; Kushniruk, Andre W; Palojoki, Sari; Saranto, Kaija; Takeda, Hiroshi

    2016-01-01

    Technology-induced errors are a growing concern for health care organizations. Such errors arise from the interaction between healthcare and information technology deployed in complex settings and contexts. As the number of health information technologies used to provide patient care rises, so will the need to develop ways to improve the quality and safety of the technology that we use. The objective of the panel is to describe varying approaches to improving software safety from an organizational and health-systems perspective. We define what a technology-induced error is. Then, we discuss how software design and testing can be used to improve health information technologies. This discussion is followed by work in the area of monitoring and reporting at a health district and national level. Lastly, we draw on the quality, safety and resilience literature. The target audience for this work is nursing and health informatics researchers, practitioners, administrators, policy makers and students.

  15. Avoiding Systematic Errors in Isometric Squat-Related Studies without Pre-Familiarization by Using Sufficient Numbers of Trials

    Directory of Open Access Journals (Sweden)

    Pekünlü Ekim

    2014-10-01

    Full Text Available There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first “n” trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (The Best of n Trials Method [BnT]). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93-0.98) and coefficients of variation (3.4-7.0%). The Wilcoxon’s signed rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9-10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning effect-related systematic errors.
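A minimal simulation of the learning-effect bias discussed above: when each participant improves over trials, the best of the first 3 trials systematically underestimates the best of 8. The learning model and all parameters are hypothetical, not those of the study:

```python
import random

random.seed(42)

def simulate_trials(true_max, n_trials):
    """Hypothetical acute-learning model: early trials fall short of the
    true maximum; performance approaches it over repeated attempts."""
    scores = []
    for k in range(n_trials):
        learning = 1.0 - 0.10 * (0.5 ** k)   # 10% deficit, halving each trial
        noise = random.gauss(0.0, 0.01)      # 1% trial-to-trial variability
        scores.append(true_max * (learning + noise))
    return scores

N = 2000
best3, best8 = [], []
for _ in range(N):
    s = simulate_trials(1000.0, 8)           # true MISS = 1000 N
    best3.append(max(s[:3]))                 # "best of 3 trials" method
    best8.append(max(s))                     # "best of 8 trials" method

mean3 = sum(best3) / N
mean8 = sum(best8) / N   # best-of-8 recovers more of the true maximum
```

Since max(s) is taken over a superset of s[:3], the best-of-8 estimate can never be lower per participant, and under a learning effect it is systematically higher, which is the bias the authors warn about.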

  16. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    Science.gov (United States)

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error and those that do not, in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias, by up to a ten-fold margin, compared with naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies.
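The attenuation caused by response misclassification can be sketched with a small Monte Carlo simulation. The risks, sensitivity and specificity below are hypothetical; the naive risk difference shrinks by the factor (sensitivity + specificity - 1), which is what erodes power:

```python
import random

random.seed(1)

def simulate_naive_diff(n, p0, p1, sensitivity, specificity, reps=500):
    """Monte Carlo sketch: true risks p0/p1 in two groups, but the observed
    response is misclassified; the naive risk difference shrinks toward 0."""
    diffs = []
    for _ in range(reps):
        obs = []
        for p in (p0, p1):
            count = 0
            for _ in range(n):
                y = random.random() < p                      # true response
                observed = (random.random() < sensitivity) if y \
                    else (random.random() >= specificity)    # misclassified
                count += observed
            obs.append(count / n)
        diffs.append(obs[1] - obs[0])
    return sum(diffs) / reps

naive = simulate_naive_diff(500, 0.10, 0.30, sensitivity=0.85, specificity=0.90)
true_diff = 0.30 - 0.10
# Expected naive diff = (sens + spec - 1) * true_diff = 0.75 * 0.20 = 0.15
```

A test powered for a difference of 0.20 but observing only 0.15 needs a larger sample to reach the same power, which is the design consequence the abstract quantifies.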

  17. Impact of Educational Activities in Reducing Pre-Analytical Laboratory Errors: A quality initiative.

    Science.gov (United States)

    Al-Ghaithi, Hamed; Pathare, Anil; Al-Mamari, Sahimah; Villacrucis, Rodrigo; Fawaz, Naglaa; Alkindi, Salam

    2017-08-01

    Pre-analytic errors during diagnostic laboratory investigations can lead to increased patient morbidity and mortality. This study aimed to ascertain the effect of educational nursing activities on the incidence of pre-analytical errors resulting in non-conforming blood samples. This study was conducted between January 2008 and December 2015. All specimens received at the Haematology Laboratory of the Sultan Qaboos University Hospital, Muscat, Oman, during this period were prospectively collected and analysed. Similar data from 2007 were collected retrospectively and used as a baseline for comparison. Non-conforming samples were defined as either clotted samples, haemolysed samples, use of the wrong anticoagulant, insufficient quantities of blood collected, incorrect/lack of labelling on a sample or lack of delivery of a sample in spite of a sample request. From 2008 onwards, multiple educational training activities directed at the hospital nursing staff and nursing students primarily responsible for blood collection were implemented on a regular basis. After initiating corrective measures in 2008, a progressive reduction in the percentage of non-conforming samples was observed from 2009 onwards. Despite a 127.84% increase in the total number of specimens received, there was a significant reduction in non-conforming samples from 0.29% in 2007 to 0.07% in 2015, resulting in an improvement of 75.86% (P <0.05). Continuing educational activities directed primarily towards hospital nursing staff had a positive impact on the quality of laboratory specimens by significantly reducing pre-analytical errors.

  18. Standard error propagation in R-matrix model fitting for light elements

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin

    2003-01-01

    The error propagation features of R-matrix model fitting for the ⁷Li, ¹¹B and ¹⁷O systems were researched systematically. Some laws of error propagation were revealed, an empirical formula P_j = U_j^c / U_j^d = K_j · S̄ · √m / √N for describing standard error propagation was established, and the most likely error ranges for the standard cross sections of ⁶Li(n,t), ¹⁰B(n,α₀) and ¹⁰B(n,α₁) were estimated. The problem that the standard errors of light-nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect. Yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of ⁷Li, ¹¹B and ¹⁷O have been studied systematically, some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are suitable for similar model fitting in other scientific fields. (author)

  19. Error analysis of the microradiographical determination of mineral content in mineralised tissue slices

    International Nuclear Information System (INIS)

    Jong, E. de J. de; Bosch, J.J. ten

    1985-01-01

    The microradiographic method, used to measure the mineral content in slices of mineralised tissues as a function of position, is analysed. The total error in the measured mineral content is split into systematic errors per microradiogram and random noise errors. These errors are measured quantitatively. Predominant contributions to systematic errors appear to be x-ray beam inhomogeneity, the determination of the step wedge thickness and stray light in the densitometer microscope, while noise errors are influenced by the choice of film, the value of the optical film transmission of the microradiographic image and the area of the densitometer window. Optimisation criteria are given. The authors used these criteria, together with the requirement that the method be fast and easy to use, to build an optimised microradiographic system. (author)

  20. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    Science.gov (United States)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving space-borne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars such as with the Global Precipitation Measurement (GPM) mission.
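The separation of systematic bias and random error used in such radar comparisons can be sketched as a multiplicative bias (ratio of totals) plus residual scatter; the paired rain rates below are hypothetical:

```python
import math

# Hypothetical paired (satellite PR, ground reference Q2) rain rates in mm/h:
pr = [1.2, 3.5, 0.8, 5.1, 2.0, 4.4]
q2 = [1.5, 4.0, 1.0, 6.0, 2.6, 5.0]

# Multiplicative systematic bias: ratio of totals (< 1 means the satellite
# underestimates relative to the reference).
bias = sum(pr) / sum(q2)

# Random error: RMS of the residuals left after removing that bias.
residuals = [p - bias * q for p, q in zip(pr, q2)]
rmse_random = math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

Reporting the bias and the residual RMSE separately matters because the bias can be corrected deterministically, while the random component sets the floor of the error model.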

  1. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...
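The analysis-of-precision idea, comparing the observed scatter of replicates with the scatter predicted from the assigned standard deviations, can be sketched as a chi-square test. This is a simplified reading of the approach, with hypothetical data:

```python
def analysis_of_precision(results, predicted_sds):
    """T = sum of (x_i - weighted mean)^2 / sigma_i^2 should follow a
    chi-square distribution with n-1 degrees of freedom when the method
    is in statistical control (replicates scatter as predicted)."""
    weights = [1.0 / s ** 2 for s in predicted_sds]
    wmean = sum(w * x for w, x in zip(weights, results)) / sum(weights)
    T = sum(w * (x - wmean) ** 2 for w, x in zip(weights, results))
    return T, len(results) - 1

# Four replicates, each with an a priori standard deviation of 0.2:
T, dof = analysis_of_precision([10.2, 9.8, 10.1, 10.4], [0.2, 0.2, 0.2, 0.2])
# Compare T against the chi-square critical value (7.81 at 5% for 3 d.o.f.):
in_control = T < 7.81
```

A T far above the critical value signals excess scatter (an unaccounted error source); systematically small T suggests the assigned standard deviations are too pessimistic.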

  2. Does Chinese calligraphy therapy reduce neuropsychiatric symptoms: a systematic review and meta-analysis.

    Science.gov (United States)

    Chu, Kuan-Yu; Huang, Chih-Yang; Ouyang, Wen-Chen

    2018-03-07

    There are currently no systematic reviews or meta-analyses of Chinese calligraphy therapy (CCT) to reduce neuropsychiatric symptoms. The aim of this systematic review and meta-analysis was to explore the efficacy of CCT for people with neuropsychiatric symptoms. We searched Chinese and English databases, including the Cochrane Central Register of Controlled Trials and Wanfang Data, for relevant articles published between the earliest year available and December 2016. The search was limited to randomized controlled trials and controlled clinical studies and the associated keywords were "handwriting," "Chinese calligraphy," "Chinese calligraphy therapy," "Calligraphy exercise," and "Calligraphy training." The 21 articles that met these criteria were used in the analysis. The Joanna Briggs Institute critical appraisal checklist was used to assess methodological quality. CCT significantly reduced psychosis (10 studies, 965 subjects, standardized mean difference [SMD] = -0.17, 95% confidence interval [CI] [-0.30, -0.04], Z = 2.60, p < 0.01), reduced schizophrenia symptoms (4 studies, 287 subjects, SMD = -0.35, 95% CI [-0.59, -0.12], Z = 2.96, p = 0.003) and reduced the negative symptoms of schizophrenia (4 studies, 276 subjects, SMD = -1.39, 95% CI [-1.65, -1.12], Z = 10.23, p < 0.001). CCT exerts a curative effect on neuropsychiatric symptoms, but the evidence remains insufficient. A large number of RCTs are needed to facilitate additional systematic reviews of evidence for CCT.
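Pooled SMDs of this kind come from inverse-variance weighting. A minimal fixed-effect sketch with hypothetical per-study values follows (the review's actual model may differ, e.g. random effects):

```python
import math

def pool_smd(smds, ses):
    """Inverse-variance fixed-effect pooling of standardized mean
    differences; returns the pooled SMD, its SE, and the Z statistic."""
    weights = [1.0 / se ** 2 for se in ses]          # weight = 1 / variance
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se, pooled / se

# Hypothetical per-study SMDs and standard errors:
pooled, se, z = pool_smd([-0.2, -0.1, -0.3], [0.10, 0.15, 0.12])
```

The 95% CI is then pooled ± 1.96·se, and |Z| > 1.96 corresponds to p < 0.05, which is how figures like "Z = 2.96, p = 0.003" arise.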

  3. Reducing Diagnostic Errors through Effective Communication: Harnessing the Power of Information Technology

    Science.gov (United States)

    Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann

    2008-01-01

    Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151

  4. Nonlinear mechanism for the suppression of error field magnetic islands by plasma flow

    International Nuclear Information System (INIS)

    Parker, R.D.

    1992-01-01

    Non-axisymmetric magnetic field perturbations generated, for example, by errors in the alignment of the field coils are known to lead to reduced confinement in a tokamak. By inducing the formation of small, stationary, magnetic islands on all rational surfaces they can enhance radial transport and under certain circumstances interact with MHD instabilities to trigger the onset of locked modes leading, in some cases, to disruption of the plasma discharge. Given the stationary nature of the error field islands it is natural to consider whether they can be reduced significantly by the viscous drag of a sheared flow resulting from a bulk rotation of the plasma. In this paper, we examine this interaction by modelling the nonlinear growth and saturation of force-reconnected magnetic islands driven by a corrugated boundary in a slab plasma with an initially uniform flow. A systematic parameter study is made of the time asymptotic steady state. (author) 3 figs., 5 refs

  5. Experiences of and support for nurses as second victims of adverse nursing errors: a qualitative systematic review.

    Science.gov (United States)

    Cabilan, C J; Kynoch, Kathryn

    2017-09-01

    Second victims are clinicians who have made adverse errors and feel traumatized by the experience. The current published literature on second victims is mainly representative of doctors, hence nurses' experiences are not fully depicted. This systematic review was necessary to understand the second victim experience for nurses, explore the support provided, and recommend appropriate support systems for nurses. The objective was to synthesize the best available evidence on nurses' experiences as second victims, and to explore their experiences of the support they receive and the support they need. Participants were registered nurses who had made adverse errors. The review included studies that described nurses' experiences as second victims and/or the support they received after making adverse errors, conducted in any health care setting worldwide. The qualitative designs included were grounded theory, discourse analysis and phenomenology. A structured search strategy was used to locate all unpublished and published qualitative studies, limited to the English language and published between 1980 and February 2017. The references of studies selected for eligibility screening were hand-searched for additional literature. Eligible studies were assessed by two independent reviewers for methodological quality using a standardized critical appraisal instrument from the Joanna Briggs Institute Qualitative Assessment and Review Instrument (JBI QARI). Themes and narrative statements were extracted from papers included in the review using the standardized data extraction tool from JBI QARI. Data synthesis was conducted using the Joanna Briggs Institute meta-aggregation approach. Nine qualitative studies were included in the review. The narratives of 284 nurses generated a total of 43 findings, which formed 15 categories based on similarity of meaning. Four synthesized findings were generated from the categories: (i) The error brings a considerable emotional burden to the

  6. Checklist Usage as a Guidance on Read-Back Reducing the Potential Risk of Medication Error

    Directory of Open Access Journals (Sweden)

    Ida Bagus N. Maharjana

    2014-06-01

    Full Text Available A hospital, as the last line of health services, shall provide quality service oriented to patient safety; one responsibility is preventing medication errors. Effective collaboration and communication between the professions is needed to achieve patient safety, and read-back is one way of performing effective communication. This was a before-after study with a PDCA (TQM) approach. The samples were the medication charts in patient medical records in the 3rd week of May (before) and the 3rd week of July (after) 2013. The treatment used the checklist, asking doctors and nurses to take 2 minutes for read-back after their joint visit. There were 57 samples (before) and 64 samples (after). Before the intervention, 45.54% of the medication charts in patient medical records were incomplete, carrying a potential risk of medication error; this fell to 10.17% after treatment with the read-back checklist for 10 weeks, with 77.78% achievement based on the PDCA (TQM) approach. Checklist usage as a guidance on read-back as an effective communication can reduce the incompleteness of drug records in medical records that carries a potential risk of medication error, from 45.54% to 10.17%.

  7. Hypothermia after cardiac arrest should be further evaluated-A systematic review of randomised trials with meta-analysis and trial sequential analysis

    DEFF Research Database (Denmark)

    Nielsen, Niklas; Friberg, Hans; Gluud, Christian

    2011-01-01

    BACKGROUND: Guidelines recommend mild induced hypothermia (MIH) to reduce mortality and neurological impairment after out-of-hospital cardiac arrest. Our objective was to systematically evaluate the evidence for MIH, taking into consideration the risks of systematic and random error, and to GRADE the evidence. METHODS: Systematic review with meta-analysis and trial sequential analysis of randomised trials evaluating MIH after cardiac arrest in adults. We searched the CENTRAL, MEDLINE, and EMBASE databases until May 2009. Retrieved trials were evaluated with Cochrane methodology. Meta-analytic estimates ... The substantial risk of bias and concerns with directness rated down the quality of the evidence to low. CONCLUSIONS: Evidence regarding MIH after out-of-hospital cardiac arrest is still inconclusive and associated with non-negligible risks of systematic and random errors. Using GRADE methodology, we conclude...

  8. Trend analysis of human error events and assessment of their proactive prevention measure at Rokkasho reprocessing plant

    International Nuclear Information System (INIS)

    Yamazaki, Satoru; Tanaka, Izumi; Wakabayashi, Toshio

    2012-01-01

    A trend analysis of human error events is important for preventing the recurrence of human error events. We propose a new method for identifying common characteristics from the results of trend analysis, such as the latent weaknesses of an organization, and a management process for strategic error prevention. In this paper, we describe a trend analysis method for human error events that have accumulated in the organization, and the utilization of the results of trend analysis to prevent accidents proactively. Although the systematic analysis of human error events, the monitoring of their overall trend, and the utilization of the analyzed results have been examined for plant operation, such information has never been utilized completely. Sharing information on human error events and analyzing their causes leads to the clarification of problems in management and human factors. This new method was applied to the human error events that occurred in the Rokkasho reprocessing plant from October 2010. Results revealed that the output of this method is effective in judging the error prevention plan and that the number of human error events was reduced to about 50% of those observed in 2009 and 2010. (author)

  9. The effectiveness of interventions for reducing stigma related to substance use disorders: a systematic review

    OpenAIRE

    Livingston, James D; Milne, Teresa; Fang, Mei Lan; Amari, Erica

    2012-01-01

    Aims This study provides a systematic review of existing research that has empirically evaluated interventions designed to reduce stigma related to substance use disorders. Methods A comprehensive review of electronic databases was conducted to identify evaluations of substance use disorder related stigma interventions. Studies that met inclusion criteria were synthesized and assessed using systematic review methods. Results Thirteen studies met the inclusion criteria. The methodological qual...

  10. SIMULATION OF INERTIAL NAVIGATION SYSTEM ERRORS AT AERIAL PHOTOGRAPHY FROM UAV

    Directory of Open Access Journals (Sweden)

    R. Shults

    2017-05-01

    Full Text Available The accuracy of UAV positioning with an INS during aerial photography can be determined in two different ways: by modelling the measurement errors or by in-field calibration of the INS. The paper presents the results of an INS error study based on mathematical modelling. The study comprised the following steps: development of an INS computer model; INS simulation; and, using reference data without errors, estimation of the errors and their influence on the accuracy of maps created from UAV data. It must be remembered that the orientation angles and the coordinates of the projection centre may change abruptly under atmospheric influences (varying air density, wind, etc.). The mathematical model of the INS was therefore constructed to accommodate different models of wind gusts. The simulation used typical characteristics of micro-electromechanical (MEMS) INS and the parameters of the standard atmosphere. The simulation established the dominance of INS systematic errors, which accumulate during photography and require a compensation mechanism, especially for the orientation angles. MEMS INS also exhibit a high level of noise at the system input; thanks to the developed model, the impact of this noise can be investigated separately, in the absence of systematic errors. The research showed that over a 5-second observation interval the impacts of the random and systematic components are almost the same. The developed INS error model was implemented in the Matlab environment and can readily be improved and extended with new blocks.
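
    The accumulation behaviour described above (a linear drift from a constant bias versus a random walk from sensor noise over a 5 s interval) can be sketched for a single gyro axis. The bias and noise figures below are generic MEMS-grade assumptions for illustration, not values from the paper:

    ```python
    import numpy as np

    def simulate_gyro_angle_error(bias_dps, noise_dps, dt, duration, seed=0):
        """Integrate one gyro axis to compare the attitude-angle error caused
        by a constant (systematic) bias with that caused by white (random)
        noise. Returns the two error components in degrees per time step."""
        rng = np.random.default_rng(seed)
        n = int(duration / dt)
        # a constant bias integrates into a linear angle drift
        systematic = np.cumsum(np.full(n, bias_dps) * dt)
        # white measurement noise integrates into a random walk
        random_walk = np.cumsum(rng.normal(0.0, noise_dps, n) * dt)
        return systematic, random_walk

    # illustrative MEMS-grade values: 10 deg/h bias, 0.05 deg/s noise, 100 Hz
    sys_err, rnd_err = simulate_gyro_angle_error(10.0 / 3600.0, 0.05, 0.01, 5.0)
    ```

    The systematic drift is deterministic (bias times elapsed time), while the random-walk magnitude grows only with the square root of time; comparable magnitudes over short intervals echo the paper's observation.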

  11. LOWER BOUNDS ON PHOTOMETRIC REDSHIFT ERRORS FROM TYPE Ia SUPERNOVA TEMPLATES

    International Nuclear Information System (INIS)

    Asztalos, S.; Nikolaev, S.; De Vries, W.; Olivier, S.; Cook, K.; Wang, L.

    2010-01-01

    Cosmology with Type Ia supernovae has heretofore required extensive spectroscopic follow-up to establish an accurate redshift. Though this resource-intensive approach is tolerable at the present discovery rate, the next generation of ground-based all-sky survey instruments will render it unsustainable. Photometry-based redshift determination may be a viable alternative, though the technique introduces non-negligible errors that ultimately degrade the ability to discriminate between competing cosmologies. We present a strictly template-based photometric redshift estimator and compute redshift reconstruction errors in the presence of statistical errors. Under highly degraded photometric conditions corresponding to a statistical error σ of 0.5, the residual redshift error is found to be 0.236 when assuming a nightly observing cadence and a single Large Synoptic Survey Telescope (LSST) u-band filter. Utilizing all six LSST bandpass filters reduces the residual redshift error to 9.1 x 10^-3. Assuming a more optimistic statistical error σ of 0.05, we derive residual redshift errors of 4.2 x 10^-4, 5.2 x 10^-4, 9.2 x 10^-4, and 1.8 x 10^-3 for observations occurring nightly, every 5th, 20th and 45th night, respectively, in each of the six LSST bandpass filters. Adopting an observing cadence in which photometry is acquired with all six filters every 5th night and a realistic supernova distribution, binned redshift errors are combined with photometric errors with a σ of 0.17 and systematic errors with a σ of ∼0.003 to derive joint errors (σ_w, σ_w′) of (0.012, 0.066), respectively, in (w, w′) with 68% confidence using the Fisher matrix formalism. Though highly idealized in the present context, the methodology is nonetheless quite relevant for the next generation of ground-based all-sky surveys.

  12. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  13. Using total quality management approach to improve patient safety by preventing medication error incidences*.

    Science.gov (United States)

    Yousef, Nadin; Yousef, Farah

    2017-09-04

    Whereas one of the predominant causes of medication errors is drug administration error, a previous study related to our investigations and reviews estimated that medication errors occurred in 6.7 out of 100 administered medication doses. We therefore aimed, using a six sigma approach, to propose a way to reduce these errors to fewer than 1 per 100 administered doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was held in a general government hospital. First, we systematically studied the current medication use process. Second, we applied the six sigma approach, utilizing the five-step DMAIC process (Define, Measure, Analyze, Implement, Control), to find the real reasons behind such errors and to devise a useful solution for avoiding medication error incidences in daily healthcare professional practice. A data sheet was used for data collection and Pareto diagrams were used for analysis. Our investigation identified the real cause behind administered medication errors. The Pareto diagrams showed that the error percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7 times higher. This means that mistakes in the prescribing phase, especially poor handwritten prescriptions (17.6% of the errors in this phase), are responsible for consequent mistakes later in the treatment process. We therefore propose an effective low-cost strategy based on the behavior of healthcare workers, in the form of guideline recommendations to be followed by physicians. This method can serve as a prior caution to decrease errors in the prescribing phase, which may in turn decrease administered medication error incidences to less than 1%. This behavioral improvement can be efficient in improving handwritten prescriptions and decreasing the consequent errors related to administrated
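
    The Pareto analysis used in the study reduces to sorting causes by share and accumulating percentages. The figures below reuse the phase-level percentages from the abstract; the "other" bucket is hypothetical padding added so the shares sum to 100%:

    ```python
    def pareto_shares(causes):
        """Sort error causes by magnitude and return (name, cumulative %)
        pairs, the computation behind a Pareto diagram."""
        total = sum(causes.values())
        ordered = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
        cumulative, running = [], 0.0
        for name, value in ordered:
            running += 100.0 * value / total
            cumulative.append((name, round(running, 1)))
        return cumulative

    shares = pareto_shares({"prescribing": 42.8, "administration": 24.8, "other": 32.4})
    ```

    Reading the cumulative column off such a table identifies the "vital few" causes (here, the prescribing phase) that dominate the error count.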

  14. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    Science.gov (United States)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels covering the Urmia Lake basin in northwestern Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in detecting and estimating rainfall rate. The residuals are decomposed into Hit, Miss and False Alarm (FA) estimation biases, while a continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations", is introduced, and the behavior of existing categorical/statistical measures and error components is analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of systematic error. - Low reliability of PERSIANN estimations is observed at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls. It is also important to note that PERSIANN's error characteristics vary by season with the conditions and rainfall patterns of that season, which shows the necessity of a seasonally distinct approach for the calibration of
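
    The abstract does not state how the systematic/random split was computed; a common choice for satellite-versus-gauge comparisons is the regression-based (Willmott-type) decomposition sketched below, which should be read as an illustrative assumption rather than the study's exact method:

    ```python
    import numpy as np

    def decompose_mse(estimate, reference):
        """Split mean-squared error into a systematic part (distance of the
        least-squares fit from the 1:1 line) and a random part (scatter
        about the fit), following the Willmott-type decomposition."""
        estimate = np.asarray(estimate, float)
        reference = np.asarray(reference, float)
        slope, intercept = np.polyfit(reference, estimate, 1)
        fitted = slope * reference + intercept
        mse_systematic = float(np.mean((fitted - reference) ** 2))
        mse_random = float(np.mean((estimate - fitted) ** 2))
        return mse_systematic, mse_random

    # an estimator that doubles the reference has purely systematic error
    mse_sys, mse_rand = decompose_mse([2.0, 4.0, 6.0, 8.0], [1.0, 2.0, 3.0, 4.0])
    ```

    Applied per season and per rainfall-rate category, this split separates pattern/calibration deficiencies (systematic) from irreducible scatter (random).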

  15. Notes on human error analysis and prediction

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1978-11-01

    The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human errors and their roles in industrial systems are discussed. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and ''criteria for analyzability'' which must be met by industrial systems so that a systematic analysis can be performed are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing as found in the US Licensee Event Reports. (author)

  16. A study on the critical factors of human error in civil aviation: An early warning management perspective in Bangladesh

    Directory of Open Access Journals (Sweden)

    Md. Salah Uddin Rajib

    2015-01-01

    Full Text Available The safety of civil aviation will be better secured if errors in all its facets can be reduced. As in other industrial sectors, human resources are among the most complex and sensitive resources in civil aviation, and human error can cause fatal disasters. In recent years, a considerable volume of research has been conducted on civil aviation disasters. Researchers have identified the causes of these disasters from various perspectives and the areas where more attention is needed to reduce their impacts. This paper aims to find the critical factors of human error in civil aviation in a developing country (Bangladesh), since human error is accepted as one of the main causes of civil aviation disasters. The paper reviews previous research to identify the critical factors conceptually, and applies the fuzzy analytic hierarchy process (FAHP) to identify them systematically. The analyses indicate that concentration on preconditions for unsafe acts (including sub-factors) is required to ensure aviation safety.

  17. Reducing healthcare costs facilitated by surgical auditing: a systematic review.

    Science.gov (United States)

    Govaert, Johannes Arthuur; van Bommel, Anne Charlotte Madeline; van Dijk, Wouter Antonie; van Leersum, Nicoline Johanneke; Tollenaar, Robertus Alexandre Eduard Mattheus; Wouters, Michael Wilhemus Jacobus Maria

    2015-07-01

    Surgical auditing has been developed in order to benchmark and facilitate quality improvement. The aim of this review is to determine whether auditing combined with systematic feedback of information on the process and outcomes of care results in lower costs of surgical care. A systematic search of literature published before 21-08-2013 was conducted in PubMed, Embase, Web of Science, and the Cochrane Library. Articles were selected if they met the inclusion criterion of describing a surgical audit with cost evaluation. The systematic search resulted in 3608 papers. Six studies were identified as relevant, all showing a positive effect of surgical auditing on quality of healthcare, and therefore cost savings were reported. Cost reductions ranging from $16 to $356 per patient were seen in audits evaluating general or vascular procedures. The highest potential cost reduction was described in a colorectal surgical audit (up to $1,986 per patient). All six identified articles describe a reduction in complications, and thereby a reduction in costs, due to surgical auditing. Surgical auditing may be of greater value when high-risk procedures are evaluated, since prevention of adverse events in these procedures might have greater clinical and therefore greater financial impact. This systematic review shows that surgical auditing can function as a quality instrument and thereby as a tool to reduce costs. Since evidence is scarce so far, further studies should be performed to investigate whether surgical auditing can help turn rising healthcare costs around. In the future, incorporating (actual) cost analyses and patient-reported outcome measures would increase the audits' value and provide a complete overview of the value of healthcare.

  18. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  19. Drought Persistence Errors in Global Climate Models

    Science.gov (United States)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
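
    The persistence estimate used above, the dry-to-dry transition probability with "dry" defined as a negative precipitation anomaly, can be computed in a few lines. Using the series mean as the anomaly baseline is a simplification here; a real analysis would use a proper (e.g. monthly) climatology:

    ```python
    import numpy as np

    def dry_to_dry_probability(precip, baseline=None):
        """P(dry at t+1 | dry at t), with 'dry' meaning precipitation below
        the baseline (a negative anomaly). Returns NaN if no dry steps occur."""
        precip = np.asarray(precip, dtype=float)
        if baseline is None:
            baseline = precip.mean()  # simplification: flat climatology
        dry = precip < baseline
        prev, nxt = dry[:-1], dry[1:]
        n_dry = int(prev.sum())
        if n_dry == 0:
            return float("nan")
        return int(np.logical_and(prev, nxt).sum()) / n_dry

    # three dry months, two of which are followed by another dry month
    p = dry_to_dry_probability([0.0, 0.0, 0.0, 5.0, 5.0])
    ```

    Comparing this statistic between GCM output and observation-based series, at monthly and annual scales, reproduces the kind of persistence-bias assessment the study performs.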

  20. Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain

    Directory of Open Access Journals (Sweden)

    Yiwen Mei

    2016-03-01

    Full Text Available The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied to the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, the error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and for the rain flood events with a high runoff coefficient. This event-based analysis of satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.

  1. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I2CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  2. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we developed and used a beam-line simulator to test the ideas that had not been tested experimentally. To save valuable physicist's time, we further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist, but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper

  3. Impact of MLC leaf position errors on simple and complex IMRT plans for head and neck cancer

    International Nuclear Information System (INIS)

    Mu, G; Ludlum, E; Xia, P

    2008-01-01

    The dosimetric impact of random and systematic multi-leaf collimator (MLC) leaf position errors is relatively unknown for head and neck intensity-modulated radiotherapy (IMRT) patients. In this report we studied 17 head and neck IMRT patients, including 12 treated with simple plans (fewer than 100 segments) and 5 with complex plans (more than 100 segments). Random errors (-2 to +2 mm) and systematic errors (±0.5 mm and ±1 mm) in MLC leaf positions were introduced into the clinical plans and the resultant dose distributions were analyzed based on defined endpoint doses. The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm for both simple and complex plans. However, for systematic MLC leaf position errors, we found significant dosimetric differences between the simple and complex IMRT plans. For a 1 mm systematic error, the average change in D95% was 4% in simple plans versus 8% in complex plans. The average changes in D0.1cc of the spinal cord and brain stem were 4% in simple plans versus 12% in complex plans. The average changes in the parotid glands were 9% in simple plans versus 13% in complex plans. Overall, simple IMRT plans are less sensitive to leaf position errors than complex IMRT plans

  4. Making Residents Part of the Safety Culture: Improving Error Reporting and Reducing Harms.

    Science.gov (United States)

    Fox, Michael D; Bump, Gregory M; Butler, Gabriella A; Chen, Ling-Wan; Buchert, Andrew R

    2017-01-30

    Reporting medical errors is a focus of the patient safety movement. As frontline physicians, residents are optimally positioned to recognize errors and flaws in systems of care. Previous work highlights the difficulty of engaging residents in the identification and/or reduction of medical errors and in integrating these trainees into their institutions' cultures of safety. The authors describe the implementation of a longitudinal, discipline-based, multifaceted curriculum to enhance the reporting of errors by pediatric residents at Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center. The key elements of this curriculum included providing the necessary education to identify medical errors with an emphasis on systems-based causes, modeling of error reporting by faculty, and integrating error reporting and discussion into the residents' daily activities. The authors tracked monthly error reporting rates by residents and other health care professionals, in addition to serious harm event rates at the institution. The interventions resulted in significant increases in error reports filed by residents, from 3.6 to 37.8 per month over 4 years. Increased error reporting correlated with a decline in serious harm events, from 15.0 to 8.1 per month over 4 years (P = 0.01). Integrating patient safety into everyday resident responsibilities encourages frequent reporting and discussion of medical errors and leads to improvements in patient care. Multiple simultaneous interventions are essential to making residents part of the safety culture of their training hospitals.

  5. Systematic Review: Audiovisual Interventions for Reducing Preoperative Anxiety in Children Undergoing Elective Surgery

    Science.gov (United States)

    Van Lieshout, Ryan J.; Schmidt, Louis A.; Dobson, Kathleen G.; Buckley, Norman

    2016-01-01

    Objective To examine the effectiveness of Audiovisual (AV) interventions at reducing preoperative anxiety and its associated outcomes in children undergoing elective surgery. Methods A systematic review of randomized controlled trials (RCTs) and nonrandomized studies where the primary outcome was children’s preoperative anxiety was conducted. Secondary outcomes included postoperative pain, behavioral changes, recovery, induction compliance, satisfaction, and cost-effectiveness. The risk of bias of each study was assessed. Results In all, 18 studies were identified. A meta-analytic approach and narrative synthesis of findings were used to summarize the results of the studies. Conclusions This systematic review suggests that AV interventions can be effective in reducing children’s preoperative anxiety. Videos, multi-faceted programs, and interactive games appear to be most effective, whereas music therapy and Internet programs are less effective. While AV interventions appear potentially useful, adequately powered RCTs are required to conclusively pinpoint the components and mechanisms of the most effective AV interventions and guide practice. PMID:26476281

  6. Systematic analysis of dependent human errors from the maintenance history at Finnish NPPs - A status report

    Energy Technology Data Exchange (ETDEWEB)

    Laakso, K. [VTT Industrial Systems (Finland)]

    2002-12-01

    Operating experience has shown missed-detection events, where faults have passed inspections and functional tests during maintenance outages and persisted into subsequent operating periods. The causes of these failures have often been complex event sequences involving human and organisational factors. Common cause and other dependent failures of safety systems in particular may contribute significantly to the reactor core damage risk. The topic has been addressed in Finnish studies of human common cause failures, in which experiences of latent human errors have been searched for and analysed in detail in the maintenance history. The review of the bulk of the analysis results for the Olkiluoto and Loviisa plant sites shows that instrumentation and control and electrical equipment are more prone to failure events caused by human error than other maintenance areas, and that plant modifications and also predetermined preventive maintenance are significant sources of common cause failures. Most errors stem from the refuelling and maintenance outage period at both sites, and less than half of the dependent errors were identified during the same outage. The dependent human errors originating from modifications could be reduced by more tailored specification and coverage of their start-up testing programs. Improvements could also be achieved by more case-specific planning of the installation inspection and functional testing of complicated maintenance works, or of work objects of higher plant safety and availability importance. Better use and analysis of condition monitoring information for maintenance steering could also help. Feedback from discussions of the analysis results with plant experts and professionals remains crucial in developing the final conclusions and recommendations that meet the specific development needs at the plants. (au)

  7. The Acquisition of Subject-Verb Agreement in Written French: From Novices to Experts' Errors.

    Science.gov (United States)

    Fayol, Michel; Largy, Pierre; Hupet, Michel

    1999-01-01

    Aims at demonstrating the gradual automatization of subject-verb agreement operation in young writers by examining developmental changes in the occurrence of agreement errors. Finds that subjects' performance moved from systematic errors to attraction errors through an intermediate phase. Concludes that attraction errors are a byproduct of the…

  8. Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available The limiting factor for precise orbit determination (POD) of low-earth-orbit (LEO) satellites using dual-frequency GPS is nowadays mainly the in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part of the phase error model are estimated, respectively, from the bin-wise mean and standard deviation of the phase postfit residuals computed during orbit determination. By removing the systematic component and adjusting the weight of the phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. The GRACE data of 1-31 January 2006 are processed, and three types of orbit solutions are obtained: POD without phase error model correction, POD with mean value correction of the phase error model, and POD with full phase error model correction. The three-dimensional (3D) orbit improvements derived from the phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D influences arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. Phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data also demonstrate that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.
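
    A minimal sketch of the bin-wise statistics behind the two-component model: the bin mean of the postfit residuals estimates the systematic part and the bin standard deviation the random part. Binning by elevation angle only, with a 5° width, is an assumption for illustration; the paper bins by the full signal-reception direction:

    ```python
    import numpy as np

    def phase_error_model(elevation_deg, residuals, bin_width=5.0):
        """Bin phase postfit residuals by elevation angle and return, per
        bin, the mean (systematic component) and standard deviation (random
        component); bins with no data are left as NaN."""
        elevation_deg = np.asarray(elevation_deg, float)
        residuals = np.asarray(residuals, float)
        edges = np.arange(0.0, 90.0 + bin_width, bin_width)
        idx = np.digitize(elevation_deg, edges) - 1
        means = np.full(len(edges) - 1, np.nan)
        stds = np.full(len(edges) - 1, np.nan)
        for b in range(len(edges) - 1):
            r = residuals[idx == b]
            if r.size:
                means[b] = r.mean()
                stds[b] = r.std()
        return edges, means, stds

    edges, means, stds = phase_error_model([2.0, 4.0, 12.0, 13.0], [1.0, 3.0, 2.0, 4.0])
    ```

    The correction step then subtracts `means[b]` from each phase observation falling in bin b, and the reweighting step scales that observation's assumed variance by `stds[b]**2`.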

  9. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y [University of Kansas Hospital, Kansas City, KS (United States); Fullerton, G; Goins, B [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States)

    2015-06-15

    Purpose: In our previous study, a preclinical multi-modality quality assurance (QA) phantom containing five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors in tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3-4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After the tumors were excised, in-air micro-CT imaging was performed to determine the reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Linear regression analysis was then performed to compare the image-based tumor volumes with the reference tumor volume and the known test object volumes for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US, respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US, respectively. Conclusion: For both the animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent, and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
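
    The stated volume formula takes the three maximum diameters directly (not radii); for equal diameters it reduces to the familiar sphere volume, which makes a quick sanity check:

    ```python
    import math

    def ellipsoid_volume(a, b, c):
        """Tumor volume from three perpendicular maximum diameters,
        V = (pi/6) * a * b * c."""
        return math.pi / 6.0 * a * b * c

    # a 2 mm sphere: (pi/6) * 8 = (4/3) * pi * r^3 with r = 1 mm
    v = ellipsoid_volume(2.0, 2.0, 2.0)
    ```

    Regressing such image-based volumes against the excised reference volumes yields the slopes reported in the Results; a slope above 1 indicates systematic overestimation, below 1 underestimation.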

  10. Professional, structural and organisational interventions in primary care for reducing medication errors.

    Science.gov (United States)

    Khalil, Hanan; Bell, Brian; Chambers, Helen; Sheikh, Aziz; Avery, Anthony J

    2017-10-04

    Medication-related adverse events in primary care represent an important cause of hospital admissions and mortality. Adverse events could result from people experiencing adverse drug reactions (not usually preventable) or from medication errors (usually preventable). To determine the effectiveness of professional, organisational and structural interventions compared to standard care to reduce preventable medication errors by primary healthcare professionals that lead to hospital admissions, emergency department visits, and mortality in adults. We searched CENTRAL, MEDLINE, Embase, three other databases, and two trial registries on 4 October 2016, together with reference checking, citation searching and contact with study authors to identify additional studies. We also searched several sources of grey literature. We included randomised trials in which healthcare professionals provided community-based medical services. We also included interventions in outpatient clinics attached to a hospital where people are seen by healthcare professionals but are not admitted to hospital. We only included interventions that aimed to reduce medication errors leading to hospital admissions, emergency department visits, or mortality. We included all participants, irrespective of age, who were prescribed medication by a primary healthcare professional. Three review authors independently extracted data. Each of the outcomes (hospital admissions, emergency department visits, and mortality) is reported in natural units (i.e. number of participants with an event per total number of participants at follow-up). We presented all outcomes as risk ratios (RRs) with 95% confidence intervals (CIs). We used the GRADE tool to assess the certainty of evidence. We included 30 studies (169,969 participants) in the review addressing various interventions to prevent medication errors; four studies addressed professional interventions (8266 participants) and 26 studies described
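
    The risk-ratio outcome measure used in the review can be illustrated with a short sketch. The arm sizes and event counts below are hypothetical, and the confidence interval uses the standard log-normal approximation.

```python
import math

def risk_ratio_ci(events_i, n_i, events_c, n_c, z=1.96):
    """Risk ratio and 95% CI (log-normal approximation) from counts of
    participants with an event in intervention and control arms."""
    rr = (events_i / n_i) / (events_c / n_c)
    # Standard error of log(RR)
    se = math.sqrt(1/events_i - 1/n_i + 1/events_c - 1/n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 30/1000 events in the intervention arm vs 45/1000 in control.
rr, lo, hi = risk_ratio_ci(30, 1000, 45, 1000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

    A CI spanning 1 (as here) would mean the intervention's effect on, say, hospital admissions is not statistically distinguishable from standard care.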

  11. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    Science.gov (United States)

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  12. Random and systematic errors in case–control studies calculating the injury risk of driving under the influence of psychoactive substances

    DEFF Research Database (Denmark)

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.

    2013-01-01

    Between 2006 and 2010, six population-based case-control studies were conducted as part of the European research project DRUID (DRiving Under the Influence of Drugs, alcohol and medicines). The aim of these case-control studies was to calculate odds ratios indicating the relative risk of serious… The list of indicators that was identified in this study is useful both as guidance for systematic reviews and meta-analyses and for future epidemiological studies in the field of driving under the influence, to minimize sources of errors already at the start of the study. © 2013 Published by Elsevier Ltd.
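
    The odds ratios at the heart of such case-control studies are computed from a 2×2 exposure table. A minimal sketch with hypothetical counts, using Woolf's log-normal confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with 95% CI (Woolf method) from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: substance detected in 40/200 cases vs 20/400 controls.
or_, lo, hi = odds_ratio_ci(40, 160, 20, 380)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

    Random error shows up in the width of the CI; systematic error (e.g. non-representative control sampling) biases the point estimate itself and cannot be seen from the table alone, which is why the study's list of indicators matters.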

  13. Joint position sense error in people with neck pain: A systematic review.

    Science.gov (United States)

    de Vries, J; Ischebeck, B K; Voogt, L P; van der Geest, J N; Janssen, M; Frens, M A; Kleinrensink, G J

    2015-12-01

    Several studies in recent decades have examined the relationship between proprioceptive deficits and neck pain. However, there is no uniform conclusion on the relationship between the two. Clinically, proprioception is evaluated using the Joint Position Sense Error (JPSE), which reflects a person's ability to accurately return his head to a predefined target after a cervical movement. We focused on differentiating the JPSE of people with neck pain from that of healthy controls. Systematic review according to the PRISMA guidelines. Our data sources were Embase, Medline OvidSP, Web of Science, Cochrane Central, CINAHL and Pubmed Publisher. To be included, studies had to compare JPSE of the neck (O) in people with neck pain (P) with JPSE of the neck in healthy controls (C). Fourteen studies were included. Four studies reported that participants with traumatic neck pain had a significantly higher JPSE than healthy controls. Of the eight studies involving people with non-traumatic neck pain, four reported significant differences between the groups. The JPSE did not vary between neck-pain groups. Current literature shows the JPSE to be a relevant measure when it is used correctly. All studies which calculated the JPSE over at least six trials showed a significantly increased JPSE in the neck pain group. This strongly suggests that 'number of repetitions' is a major element in correctly performing the JPSE test. Copyright © 2015 Elsevier Ltd. All rights reserved.
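
    Operationally, the JPSE is commonly quantified as the mean absolute repositioning error over repeated trials. A minimal sketch, with the six-trial minimum reflecting the review's finding and the error values being hypothetical:

```python
import statistics

def jpse(errors_deg):
    """Joint position sense error: mean absolute head-repositioning error
    (degrees) over repeated trials. At least six trials are required, in
    line with the review's finding that fewer trials are unreliable."""
    if len(errors_deg) < 6:
        raise ValueError("JPSE should be averaged over at least six trials")
    return statistics.mean(abs(e) for e in errors_deg)

# Hypothetical repositioning errors (degrees) for one participant.
print(jpse([3.0, -4.0, 5.0, -2.0, 4.0, 3.0]))  # mean absolute error: 3.5
```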

  14. The effectiveness of public health interventions to reduce the health impact of climate change: a systematic review of systematic reviews.

    Directory of Open Access Journals (Sweden)

    Maha Bouzid

    BACKGROUND: Climate change is likely to be one of the most important threats to public health in the coming years. Yet despite the large number of papers considering the health impact of climate change, few have considered what public health interventions may be of most value in reducing the disease burden. We aimed to evaluate the effectiveness of public health interventions to reduce the disease burden of high priority climate sensitive diseases. METHODS AND FINDINGS: For each disease, we performed a systematic search with no restriction on date or language of publication on Medline, Web of Knowledge, Cochrane CENTRAL and SCOPUS up to December 2010 to identify systematic reviews of public health interventions. We retrieved some 3176 records of which 85 full papers were assessed and 33 included in the review. The included papers investigated the effect of public health interventions on various outcome measures. All interventions were GRADE assessed to determine the strength of evidence. In addition we developed a systematic review quality score. The interventions included environmental interventions to control vectors, chemoprophylaxis, immunization, household and community water treatment, greening cities and community advice. For most reviews, GRADE showed low quality of evidence because of poor study design and high heterogeneity. Also for some key areas such as floods, droughts and other weather extremes, there are no adequate systematic reviews of potential public health interventions. CONCLUSION: In conclusion, we found the evidence base to be mostly weak for environmental interventions that could have the most value in a warmer world. Nevertheless, such interventions should not be dismissed. Future research on public health interventions for climate change adaptation needs to be concerned about quality in study design and should address the gap for floods, droughts and other extreme weather events that pose a risk to health.

  15. The effectiveness of public health interventions to reduce the health impact of climate change: a systematic review of systematic reviews.

    Science.gov (United States)

    Bouzid, Maha; Hooper, Lee; Hunter, Paul R

    2013-01-01

    Climate change is likely to be one of the most important threats to public health in the coming years. Yet despite the large number of papers considering the health impact of climate change, few have considered what public health interventions may be of most value in reducing the disease burden. We aimed to evaluate the effectiveness of public health interventions to reduce the disease burden of high priority climate sensitive diseases. For each disease, we performed a systematic search with no restriction on date or language of publication on Medline, Web of Knowledge, Cochrane CENTRAL and SCOPUS up to December 2010 to identify systematic reviews of public health interventions. We retrieved some 3176 records of which 85 full papers were assessed and 33 included in the review. The included papers investigated the effect of public health interventions on various outcome measures. All interventions were GRADE assessed to determine the strength of evidence. In addition we developed a systematic review quality score. The interventions included environmental interventions to control vectors, chemoprophylaxis, immunization, household and community water treatment, greening cities and community advice. For most reviews, GRADE showed low quality of evidence because of poor study design and high heterogeneity. Also for some key areas such as floods, droughts and other weather extremes, there are no adequate systematic reviews of potential public health interventions. In conclusion, we found the evidence base to be mostly weak for environmental interventions that could have the most value in a warmer world. Nevertheless, such interventions should not be dismissed. Future research on public health interventions for climate change adaptation needs to be concerned about quality in study design and should address the gap for floods, droughts and other extreme weather events that pose a risk to health.

  16. Local systematic differences in 2MASS positions

    Science.gov (United States)

    Bustos Fierro, I. H.; Calderón, J. H.

    2018-01-01

    We have found that positions in the 2MASS All-sky Catalog of Point Sources show local systematic differences with characteristic length-scales of ˜ 5 to ˜ 8 arcminutes when compared with several catalogs. We have observed that when 2MASS positions are used in the computation of proper motions, the mentioned systematic differences cause systematic errors in the resulting proper motions. We have developed a method to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The rectified 2MASS catalog with the proposed method can be regarded as an extension of UCAC4 for astrometry with accuracy ˜ 90 mas in its positions, with negligible systematic errors. Also we show that the use of these rectified positions removes the observed systematic pattern in proper motions derived from original 2MASS positions.

  17. Systematic review of active workplace interventions to reduce sickness absence.

    Science.gov (United States)

    Odeen, M; Magnussen, L H; Maeland, S; Larun, L; Eriksen, H R; Tveito, T H

    2013-01-01

    The workplace is used as a setting for interventions to prevent and reduce sickness absence, regardless of the specific medical conditions and diagnoses. To give an overview of the general effectiveness of active workplace interventions aimed at preventing and reducing sickness absence. We systematically searched PubMed, Embase, Psych-info, and ISI web of knowledge on 27 December 2011. Inclusion criteria were (i) participants over 18 years old with an active role in the intervention, (ii) intervention done partly or fully at the workplace or at the initiative of the workplace and (iii) sickness absence reported. Two reviewers independently screened articles, extracted data and assessed risk of bias. A narrative synthesis was used. We identified 2036 articles, of which 93 were assessed in full text. Seventeen articles were included (2 with low and 15 with medium risk of bias), with a total of 24 comparisons. Five interventions from four articles significantly reduced sickness absence. We found moderate evidence that graded activity reduced sickness absence and limited evidence that the Sherbrooke model (a comprehensive multidisciplinary intervention) and cognitive behavioural therapy (CBT) reduced sickness absence. There was moderate evidence that workplace education and physical exercise did not reduce sickness absence. For other interventions, the evidence was insufficient to draw conclusions. The review found limited evidence that active workplace interventions were not generally effective in reducing sickness absence, but there was moderate evidence of effect for graded activity and limited evidence for the effectiveness of the Sherbrooke model and CBT.

  18. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  19. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  20. Improvement of the physically-based groundwater model simulations through complementary correction of its errors

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Reyes Alcalde

    2017-04-01

    Physically-Based groundwater Models (PBM), such as MODFLOW, are used as groundwater resource evaluation tools on the assumption that the differences they produce (residuals or errors) are white noise. In fact, however, these numerical simulations usually show not only random errors but also systematic errors. In this work, a numerical procedure has been developed to deal with PBM systematic errors by studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). The application of the CCM to a PBM shows a decrease in local biases, a better distribution of errors and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology seems an interesting way to update a PBM while avoiding the work and cost of interfering with its internal structure.
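
    The core idea of modelling the simulation's systematic residuals externally and subtracting the predicted error can be sketched as follows. The synthetic head series and the AR(1)-style residual regression are illustrative assumptions, not the paper's actual CCM formulation.

```python
import numpy as np

t = np.arange(200)
observed = 10.0 + 0.01 * t                     # hypothetical heads (m)
pbm = observed + 0.3 + 0.1 * np.sin(t / 10.0)  # simulation with systematic error
residual = pbm - observed

# Calibrate the correction on the first half of the record:
# regress each residual on the previous one (a simple AR(1)-style model).
X = np.column_stack([np.ones(99), residual[:99]])
y = residual[1:100]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply the fitted correction on the second half.
predicted_error = beta[0] + beta[1] * residual[100:199]
corrected = pbm[101:200] - predicted_error

rmse_before = np.sqrt(np.mean((pbm[101:200] - observed[101:200]) ** 2))
rmse_after = np.sqrt(np.mean((corrected - observed[101:200]) ** 2))
print(f"RMSE before: {rmse_before:.3f} m, after: {rmse_after:.3f} m")
```

    Because the PBM's error is structured rather than white noise, even this crude external model of the residuals removes most of it, without touching the PBM's internals.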

  1. Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography

    Science.gov (United States)

    Withers, Paul; Lorenz, R. D.; Neumann, G. A.

    2002-01-01

    Each Viking lander measured a topographic profile during entry. Comparing with MOLA (Mars Orbiter Laser Altimeter) topography, we find a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.
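
    The 10-20% figure follows from hydrostatic balance: a vertical shift δz rescales density and pressure at a given altitude by roughly exp(δz/H), where H is the atmospheric scale height. A quick check, assuming a Mars scale height of about 10.8 km (an assumed round value):

```python
import math

H = 10.8e3                 # assumed Mars atmospheric scale height (m)
for dz in (1.0e3, 2.0e3):  # 1-2 km vertical trajectory error
    frac = math.exp(dz / H) - 1.0
    print(f"dz = {dz / 1e3:.0f} km -> ~{100 * frac:.0f}% density/pressure error")
```

    A 1 km offset gives roughly a 10% error and 2 km roughly 20%, matching the systematic error quoted above.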

  2. Systematic errors in the tables of theoretical total internal conversion coefficients

    International Nuclear Information System (INIS)

    Dragoun, O.; Rysavy, M.

    1992-01-01

    Some of the total internal conversion coefficients presented in the widely used tables of Rosel et al (1978 Atom. Data Nucl. Data Tables 21, 291) were found to be erroneous. The errors appear for some low transition energies, all multipolarities, and probably for all elements. The origin of the errors is explained. The subshell conversion coefficients of Rosel et al, where available, agree with our calculations to within a few percent. (author)

  3. Analysis and reduction of 3D systematic and random setup errors during the simulation and treatment of lung cancer patients with CT-based external beam radiotherapy dose planning.

    NARCIS (Netherlands)

    Boer, H.D. de; Sornsen de Koste, J.R. van; Senan, S.; Visser, A.G.; Heijmen, B.J.M.

    2001-01-01

    PURPOSE: To determine the magnitude of the errors made in (a) the setup of patients with lung cancer on the simulator relative to their intended setup with respect to the planned treatment beams and (b) in the setup of these patients on the treatment unit. To investigate how the systematic component

  4. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainty for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration differ for different kinds of instrumental measurements. The main sources of uncertainty for retrospective measurements conducted by surface-trap techniques can be divided into two groups: errors in surface ²¹⁰Pb (²¹⁰Po) activity measurements, and uncertainties in the transfer from ²¹⁰Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface-trap retrospective technique can be decreased to 35%.
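
    Independent relative uncertainty components of the kind listed above are conventionally combined in quadrature. A minimal sketch; the four component values are hypothetical placeholders, not the paper's actual error budget:

```python
import math

def total_relative_error(*components):
    """Combine independent relative uncertainty components in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical components for a surface-trap retrospective measurement:
# reference-equipment bias, calibration scatter, 210Pb/210Po activity
# counting, and the surface-activity-to-radon-concentration transfer.
total = total_relative_error(0.10, 0.15, 0.20, 0.20)
print(f"total relative error ≈ {100 * total:.0f}%")
```

    Because the components add in quadrature, the largest one dominates: halving a small term barely changes the total, which is why effort goes into the dominant transfer uncertainty.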

  5. Prebiotics, Prosynbiotics and Synbiotics: Can They Reduce Plasma Oxidative Stress Parameters? A Systematic Review.

    Science.gov (United States)

    Salehi-Abargouei, Amin; Ghiasvand, Reza; Hariri, Mitra

    2017-03-01

    This study assessed the effectiveness of prebiotics, probiotics and synbiotics in reducing serum oxidative stress parameters. PubMed/Medline, Ovid, Google Scholar, ISI Web of Science and SCOPUS were searched up to September 2016. English-language randomized clinical trials reporting the effect of prebiotic, probiotic or synbiotic interventions on serum oxidative stress parameters in human adults were included. Twenty-one randomized clinical trials met the inclusion criteria for systematic review. Two studies investigated prebiotics, four studies synbiotics and fifteen studies probiotics. According to our systematic review, prebiotics could decrease malondialdehyde and increase superoxide dismutase, but the evidence is not sufficient. In comparison with fructo-oligosaccharide, inulin is much more useful for oxidative stress reduction. Using probiotics with dairy products could reduce oxidative stress significantly, but probiotics in supplement form did not have any effect on oxidative stress. There is limited but supportive evidence that prebiotics, probiotics and synbiotics are effective for reducing oxidative stress parameters. Further randomized clinical trials with longer duration of intervention, especially in populations with increased oxidative stress, are needed to provide more definitive results before any recommendation for clinical use of these interventions.

  6. [The approaches to factors which cause medication error--from the analyses of many near-miss cases related to intravenous medication which nurses experienced].

    Science.gov (United States)

    Kawamura, H

    2001-03-01

    Given the complexity of the intravenous medication process, systematic thinking is essential to reduce medication errors. Two thousand eight hundred near-miss ('Hiyari-Hatto') cases were analyzed. As a result, eight important factors that cause intravenous medication errors were identified. In the following I summarize the systematic approach for each factor. 1. Failed communication of information: illegible handwritten orders, and inaccurate verbal orders and copying, cause medication errors. Rules must be established to prevent miscommunication. 2. Error-prone design of the hardware: look-alike packaging and labeling of drugs and the poor design of infusion pumps cause errors. The human-hardware interface should be improved by error-resistant design by manufacturers. 3. Similar patient names and simultaneous surgical procedures and interventions: this factor causes patient misidentification. Automated identification devices should be introduced into health care settings. 4. Interruption in the middle of tasks: medical work and clerical work should be assigned efficiently. 5. Inaccurate mixing procedure and insufficient mixing space: mixing procedures must be standardized and the layout of the working space must be examined. 6. Time pressure: the mismatch between workload and manpower should be improved by reconsidering the work to be done. 7. Lack of information about high-alert medications: the pharmacist should play a greater role in the medication process overall. 8. Poor knowledge and skill of recent graduates: training methods and tools to prevent medication errors must be developed.

  7. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomena. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and a local frontal plane assumption, experiments show that the proposed method can effectively alleviate the systematic errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  8. Measuring nuclear-spin-dependent parity violation with molecules: Experimental methods and analysis of systematic errors

    Science.gov (United States)

    Altuntaş, Emine; Ammon, Jeffrey; Cahn, Sidney B.; DeMille, David

    2018-04-01

    Nuclear-spin-dependent parity violation (NSD-PV) effects in atoms and molecules arise from Z⁰ boson exchange between electrons and the nucleus and from the magnetic interaction between electrons and the parity-violating nuclear anapole moment. It has been proposed to study NSD-PV effects using an enhancement of the observable effect in diatomic molecules [D. DeMille et al., Phys. Rev. Lett. 100, 023003 (2008), 10.1103/PhysRevLett.100.023003]. Here we demonstrate highly sensitive measurements of this type, using the test system ¹³⁸Ba¹⁹F. We show that systematic errors associated with our technique can be suppressed to at least the level of the present statistical sensitivity. With ˜170 h of data, we measure the matrix element W of the NSD-PV interaction with uncertainty δW/(2π) < 0.7 Hz for each of two configurations where W must have different signs. This sensitivity would be sufficient to measure NSD-PV effects of the size anticipated across a wide range of nuclei.

  9. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  10. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.
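
    A back-of-envelope version of such error propagation can be written down for a simple pinhole model, where a star's off-boresight angle satisfies tan θ = r/f. The focal length, radial position, and error magnitudes below are assumptions for illustration only, not values from the paper.

```python
import math

# Pinhole model: theta = atan(r / f), with r the star's radial position on
# the focal plane and f the focal length. All values are assumed examples.
f = 50e-3        # focal length (m)
r = 5e-3         # star image radial position (m)
df = 20e-6       # focal length error (m)
dr = 2e-6        # centroiding / principal-point error (m)

theta = math.atan2(r, f)
# First-order partial derivatives of theta = atan(r / f)
dtheta_dr = f / (f**2 + r**2)
dtheta_df = -r / (f**2 + r**2)
# Combine the two independent contributions in quadrature.
dtheta = math.hypot(dtheta_dr * dr, dtheta_df * df)
arcsec = math.degrees(dtheta) * 3600
print(f"star-direction error ≈ {arcsec:.1f} arcsec")
```

    Even micrometer-level parameter errors translate into arcsecond-level direction errors, which is why a precise calibration model matters for high-accuracy star trackers.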

  11. Writing errors by adults and by children

    NARCIS (Netherlands)

    Nes, van F.L.

    1984-01-01

    Writing errors are defined as occasional deviations from a person's normal handwriting; thus they are different from spelling mistakes. The deviations are systematic in nature to a certain degree and can therefore be quantitatively classified in accordance with (1) type and (2) location in a word.

  12. How are medication errors defined? A systematic literature review of definitions and characteristics

    DEFF Research Database (Denmark)

    Lisby, Marianne; Nielsen, L P; Brock, Birgitte

    2010-01-01

    Multiplicity in terminology has been suggested as a possible explanation for the variation in the prevalence of medication errors. So far, few empirical studies have challenged this assertion. The objective of this review was, therefore, to describe the extent and characteristics of medication error definitions in hospitals and to consider the consequences for measuring the prevalence of medication errors.

  13. The effectiveness of e-Interventions on reducing social isolation in older persons: A systematic review of systematic reviews.

    Science.gov (United States)

    Chipps, Jennifer; Jarvis, Mary Ann; Ramlall, Suvira

    2017-12-01

    As the older adult population group has been increasing in size, there has been evidence of growing social isolation and loneliness in their lives. The increased use of information communication technology and Internet-supported interventions has stimulated an interest in the benefits of e-Interventions for older people, and specifically in their role in increasing social networks and decreasing loneliness. A systematic review of e-Interventions to reduce loneliness in older people was conducted with the aim of synthesizing high-quality evidence on the effectiveness of e-Interventions in decreasing social isolation/loneliness for older people living in community/residential care. A systematic search of 12 databases for reviews published between 2000-2017 was conducted using search term synonyms for older people, social isolation and interventions. Three independent researchers screened articles and two reviewers extracted data. The Revised Assessment of Multiple Systematic Reviews was used to assess the quality of reviews. The final search identified 12 reviews, which included 22 unique primary research studies evaluating e-Interventions for social isolation or loneliness. The reviews were of moderate quality and the primary studies showed a lack of rigor. Loneliness was most frequently measured using the University of California, Los Angeles (UCLA) Loneliness Scale. Despite the limitations of the reviewed studies, there is inconsistent and weak evidence for using e-Interventions for loneliness in older people.

  14. Electronic error-reporting systems: a case study into the impact on nurse reporting of medical errors.

    Science.gov (United States)

    Lederman, Reeva; Dreyfus, Suelette; Matchan, Jessica; Knott, Jonathan C; Milton, Simon K

    2013-01-01

    Underreporting of errors in hospitals persists despite the claims of technology companies that electronic systems will facilitate reporting. This study builds on previous analyses to examine error reporting by nurses in hospitals using electronic media. This research asks whether electronic media create additional barriers to error reporting and, if so, what practical steps all hospitals can take to reduce these barriers. This is a mixed-method case study of nurses' use of an error reporting system, RiskMan, in two hospitals. The case study involved one large private hospital and one large public hospital in Victoria, Australia, both of which use the RiskMan medical error reporting system. Information technology-based error reporting systems have unique access problems and time demands and can encourage nurses to develop alternative reporting mechanisms. This research focuses on nurses and raises important findings for hospitals using such systems or considering installation. This article suggests organizational and technical responses that could reduce some of the identified barriers. Crown Copyright © 2013. Published by Mosby, Inc. All rights reserved.

  15. Considering the role of time budgets on copy-error rates in material culture traditions: an experimental assessment.

    Science.gov (United States)

    Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J

    2014-01-01

    Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.

  16. A generalized, periodic nonlinearity-reduced interferometer for straightness measurements

    International Nuclear Information System (INIS)

    Wu Chienming

    2008-01-01

    Periodic nonlinearity is a systematic error limiting the accuracy of displacement measurements at the nanometer level. However, an interferometer with a displacement measurement accuracy of less than 1 nm is required in nanometrology and in fundamental scientific research. To meet this requirement, a generalized, periodic nonlinearity-reduced interferometer, based on three construction principles, has been developed for straightness measurements. These three construction principles have resulted in an interferometer with a highly stable design with reduced periodic nonlinearity. Verifications by a straightness interferometer have demonstrated that the periodic nonlinearity was less than 40 pm. The results also demonstrate that the interferometer design is capable of subnanometer accuracy and is useful in nanometrology

  17. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to infer this error. We find that correcting for it decreases the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors in the data introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, and another is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. 
In one path the
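    The inter-satellite merging step described here, removing discontinuities between successive satellites by using their overlap periods, can be sketched as a mean-offset alignment. This is a minimal sketch under my own conventions (NaN marks months with no data; function name is mine); the actual MSU analysis additionally corrects for orbital drift and diurnal-cycle effects.

```python
import numpy as np

def merge_satellite_records(rec_a, rec_b):
    """Splice record B onto record A after removing their mean offset
    over the overlap period. NaN marks months with no data."""
    overlap = ~np.isnan(rec_a) & ~np.isnan(rec_b)
    offset = np.mean(rec_a[overlap] - rec_b[overlap])  # inter-satellite bias
    rec_b_aligned = rec_b + offset
    # Prefer A where it exists; fill the rest with the aligned B
    return np.where(np.isnan(rec_a), rec_b_aligned, rec_a)
```

Applied pairwise along the satellite sequence, this yields one continuous anomaly series from which a trend can be fitted.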

  18. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NARCIS (Netherlands)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Ozben, C. S.; Prasuhn, D.; Sandri, P. Levi; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-01-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY

  19. Quantum theory of anharmonic oscillators - a variational and systematic general approximation method

    International Nuclear Information System (INIS)

    Yamazaki, K.; Kyoto Univ.

    1984-01-01

    The paper investigates the energy levels and wavefunctions of an anharmonic oscillator characterised by the potential (1/2)ω^2 q^2 + λq^4. As a lowest-order approximation an extremely simple formula for energy levels, E_i^(0) = (i+1/2)(1/4)(3/α_i + α_i), is derived (i being the quantum number of the energy level). This formula reproduces the exact energy levels within an error of about 1%. Systematically higher orders of the present perturbation theory are developed. The present second-order perturbation theory reduces the errors of the lowest-order results by a factor of about 1/5 in general. Various ranges (large, intermediate, small) of (i, λ) are investigated and compared with the exact values obtained by other workers. For i = 0, 1, even the fourth-order perturbation calculation can be elaborated explicitly, which reduces the error to about 0.01% for any λ. For small λ it gives correct numerical coefficients up to λ^4 terms, as it should. (author)
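    Approximations of this kind are easy to cross-check numerically by diagonalizing H = p^2/2 + (1/2)ω^2 q^2 + λq^4 in a truncated harmonic-oscillator basis. The sketch below takes ω = 1; the function name, basis size, and test values are illustrative choices, not from the paper.

```python
import numpy as np

def anharmonic_levels(lam, nbasis=60, nlevels=4):
    """Lowest energy levels of H = p^2/2 + q^2/2 + lam*q^4,
    diagonalized in the harmonic-oscillator basis (omega = 1)."""
    n = np.arange(nbasis)
    # Position operator: <m|q|m+1> = sqrt((m+1)/2) for omega = 1
    q = np.zeros((nbasis, nbasis))
    off = np.sqrt((n[:-1] + 1) / 2.0)
    q[n[:-1], n[:-1] + 1] = off
    q[n[:-1] + 1, n[:-1]] = off
    # Harmonic part is diagonal: n + 1/2; add the quartic perturbation
    H = np.diag(n + 0.5) + lam * np.linalg.matrix_power(q, 4)
    return np.sort(np.linalg.eigvalsh(H))[:nlevels]
```

For small λ the ground level should sit between the unperturbed value 1/2 and the first-order estimate 1/2 + (3/4)λ, which gives a quick sanity band for the perturbative formulas.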

  20. Trauma Quality Improvement: Reducing Triage Errors by Automating the Level Assignment Process.

    Science.gov (United States)

    Stonko, David P; O Neill, Dillon C; Dennis, Bradley M; Smith, Melissa; Gray, Jeffrey; Guillamondegui, Oscar D

    2018-04-12

    Trauma patients are triaged by the severity of their injury or need for intervention while en route to the trauma center according to trauma activation protocols that are institution specific. Significant research has been aimed at improving these protocols in order to optimize patient outcomes while striving for efficiency in care. However, it is known that patients are often undertriaged or overtriaged because protocol adherence remains imperfect. The goal of this quality improvement (QI) project was to improve this adherence, and thereby reduce the triage error. It was conducted as part of the formal undergraduate medical education curriculum at this institution. A QI team was assembled and baseline data were collected, then 2 Plan-Do-Study-Act (PDSA) cycles were implemented sequentially. During the first cycle, a novel web tool was developed and implemented in order to automate the level assignment process (it takes EMS-provided data and automatically determines the level); the tool was based on the existing trauma activation protocol. The second PDSA cycle focused on improving triage accuracy in isolated, less than 10% total body surface area burns, which we identified to be a point of common error. Traumas were reviewed and tabulated at the end of each PDSA cycle, and triage accuracy was followed with a run chart. This study was performed at Vanderbilt University Medical Center and Medical School, which has a large level 1 trauma center covering over 75,000 square miles, and which sees urban, suburban, and rural trauma. The baseline assessment period and each PDSA cycle lasted 2 weeks. During this time, all activated, adult, direct traumas were reviewed. There were 180 patients during the baseline period, 189 after the first test of change, and 150 after the second test of change. All were included in analysis. 
Of 180 patients, 30 were inappropriately triaged during baseline analysis (3 undertriaged and 27 overtriaged) versus 16 of 189 (3 undertriaged and 13

  1. Parent-based interventions for preventing or reducing adolescent substance use - A systematic literature review.

    Science.gov (United States)

    Kuntsche, Sandra; Kuntsche, Emmanuel

    2016-04-01

    Despite the increasing relevance of peers, parents remain important socializing agents for their adolescent children and are therefore promising agents for inclusion in prevention or intervention programs. This systematic review provides an overview of the effectiveness of parent-based programs in preventing, curbing or reducing substance use (i.e. alcohol, tobacco and cannabis) among 10 to 18-year-olds. The databases PubMed, PsycINFO, ERIC and Google Scholar were used to identify randomized trials published within the past 12 years evaluating effects on adolescent substance use. Of the 653 records identified in the first screening, 39 publications dealing with 13 programs were included. Results reveal desirable effects of parenting measures such as rule-setting, monitoring and parent-child communication. There was also some evidence in terms of preventing, curbing or reducing adolescent substance use. However, this appears to depend particularly on the age group of the adolescents in question, the kind of parents included and the intensity of the program. To conclude, the results of this systematic review underline the importance of including parents in programs aiming to impede initiation of substance use or curb or reduce already existing substance use in adolescence. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, D; Ehler, E [University of Minnesota, Minneapolis, MN (United States)

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.
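    The Gamma Index comparison behind these pass rates can be sketched in a simplified 1D, global-normalization form. The function name and parameters are my own; clinical tools work on 2D/3D dose grids with interpolation and dose thresholds.

```python
import numpy as np

def gamma_pass_rate(ref, evl, dx, dta=3.0, dd=0.03):
    """1D global gamma pass rate (default 3%/3 mm analogue).
    ref, evl: dose profiles on the same grid; dx: spacing in mm."""
    n = len(ref)
    x = np.arange(n) * dx
    dmax = ref.max()  # global normalization
    passed = 0
    for i in range(n):
        dist2 = ((x - x[i]) / dta) ** 2                 # distance-to-agreement term
        dose2 = ((evl - ref[i]) / (dd * dmax)) ** 2     # dose-difference term
        gamma = np.sqrt((dist2 + dose2).min())          # min over evaluated points
        if gamma <= 1.0:
            passed += 1
    return 100.0 * passed / n
```

Identical profiles pass at 100%; shifting or rescaling one profile degrades the rate, mirroring how the phantom studies above score agreement.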

  3. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    International Nuclear Information System (INIS)

    Sterling, D; Ehler, E

    2015-01-01

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing

  4. Reducing Individual Variation for fMRI Studies in Children by Minimizing Template Related Errors.

    Directory of Open Access Journals (Sweden)

    Jian Weng

    Spatial normalization is an essential process for group comparisons in functional MRI studies. In practice, there is a risk of normalization errors, particularly in studies involving children, seniors or diseased populations and in regions with high individual variation. One way to minimize normalization errors is to create a study-specific template based on a large sample size. However, studies with a large sample size are not always feasible, particularly for studies in children. The performance of templates with a small sample size has not been evaluated in fMRI studies in children. In the current study, this issue was encountered in a working memory task with 29 children in two groups. We compared the performance of different templates: a study-specific template created from the experimental population, a Chinese children template and the widely used adult MNI template. We observed distinct differences in the right orbitofrontal region among the three templates in between-group comparisons. The study-specific template and the Chinese children template were more sensitive for the detection of between-group differences in the orbitofrontal cortex than the MNI template. Proper templates could effectively reduce individual variation. Further analysis revealed a correlation between the BOLD contrast size and the norm index of the affine transformation matrix, i.e., the SFN, which characterizes the difference between a template and a native image and differs significantly across subjects. We therefore proposed and tested another method to reduce individual variation that included the SFN as a covariate in group-wise statistics. This correction exhibits outstanding performance in enhancing detection power in group-level tests. 
A training effect of abacus-based mental calculation was also demonstrated, with significantly elevated activation in the right orbitofrontal region that correlated with behavioral response time across subjects in the trained group.
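    Including a per-subject covariate such as the SFN in group-level statistics amounts to adding a column to the design matrix. Below is a minimal ANCOVA-style sketch using ordinary least squares; the function name and toy design are mine, and real fMRI pipelines would add full GLM inference (variance estimates, t-statistics).

```python
import numpy as np

def group_effect_with_covariate(y, group, covariate):
    """OLS fit of y ~ 1 + group + covariate; returns the group coefficient,
    i.e. the between-group effect after adjusting for the covariate."""
    X = np.column_stack([np.ones(len(y)), group, covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]
```

When the covariate (here, a template-mismatch index) soaks up nuisance variance, the residual error shrinks and the group effect is detected with more power, which is the mechanism the correction above exploits.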

  5. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    Energy Technology Data Exchange (ETDEWEB)

    Strömberg, Sten, E-mail: sten.stromberg@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Nistor, Mihaela, E-mail: mn@bioprocesscontrol.com [Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden); Liu, Jing, E-mail: jing.liu@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden)

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
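    The ambient-temperature and -pressure sensitivity described here is exactly why volumetric gas readings are normalized to standard conditions before methane potentials are compared. A minimal ideal-gas normalization sketch (STP taken as 0 °C and 101.325 kPa; the function name is mine and water-vapour correction is omitted):

```python
def methane_volume_stp(v_measured_ml, p_ambient_kpa, t_ambient_c):
    """Normalize a measured gas volume to STP (0 degC, 101.325 kPa)
    using the ideal gas law: V_stp = V * (p/p0) * (T0/T)."""
    T0_K, P0_KPA = 273.15, 101.325
    t_kelvin = t_ambient_c + 273.15
    return v_measured_ml * (p_ambient_kpa / P0_KPA) * (T0_K / t_kelvin)
```

A reading taken at high altitude (low ambient pressure) normalizes to a noticeably smaller STP volume, consistent with the large high-altitude effect reported above.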

  6. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    International Nuclear Information System (INIS)

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-01-01

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world

  7. Tolerable systematic errors in Really Large Hadron Collider dipoles

    International Nuclear Information System (INIS)

    Peggs, S.; Dell, F.

    1996-01-01

    Maximum allowable systematic harmonics for arc dipoles in a Really Large Hadron Collider are derived. The possibility of half cell lengths much greater than 100 meters is justified. A convenient analytical model evaluating horizontal tune shifts is developed, and tested against a sample high field collider

  8. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.

    Science.gov (United States)

    Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M

    2006-10-01

    Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), and the specimen nondiagnostic rate increased from 5.8% to 19.8% (P ...). The Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.

  9. A channel-by-channel method of reducing the errors associated with peak area integration

    International Nuclear Information System (INIS)

    Luedeke, T.P.; Tripard, G.E.

    1996-01-01

    A new method of reducing the errors associated with peak area integration has been developed. This method utilizes the signal content of each channel as an estimate of the overall peak area. These individual estimates can then be weighted according to the precision with which each estimate is known, producing an overall area estimate. Experimental measurements were performed on a small peak sitting on a large background, and the results compared to those obtained from a commercial software program. Results showed a marked decrease in the spread of results around the true value (obtained by counting for a long period of time), and a reduction in the statistical uncertainty associated with the peak area. (orig.)
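    The channel-by-channel idea can be sketched as follows: each channel's background-subtracted count, divided by the fraction of the peak expected in that channel, is an independent estimate of the total area, and these estimates are combined by inverse-variance weighting. The peak-shape model and the Poisson variance propagation below are my assumptions, not details given in the abstract.

```python
import numpy as np

def weighted_peak_area(counts, background, shape):
    """counts: observed counts per channel; background: estimated background;
    shape: assumed normalized peak profile (fractions summing to 1).
    Returns (area estimate, its standard deviation)."""
    net = counts - background
    est = net / shape                         # per-channel estimate of total area
    var = (counts + background) / shape**2    # Poisson variance, propagated
    w = 1.0 / var                             # inverse-variance weights
    area = np.sum(w * est) / np.sum(w)
    sigma = np.sqrt(1.0 / np.sum(w))
    return area, sigma
```

Channels in the peak tails, where the shape fraction is tiny and the relative noise huge, automatically receive small weights, which is what narrows the spread of results compared with plain summation.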

  10. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed among third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  11. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  12. Neutron-induced soft errors in CMOS circuits

    International Nuclear Information System (INIS)

    Hazucha, P.

    1999-01-01

    The subject of this thesis is a systematic study of soft errors occurring in CMOS integrated circuits when exposed to radiation. The vast majority of commercial circuits operate in the natural environment ranging from sea level to aircraft flight altitudes (less than 20 km), where the errors are caused mainly by interaction of atmospheric neutrons with silicon. Initially, the soft error rate (SER) of a static memory was measured for supply voltages from 2V to 5V when irradiated by 14 MeV and 100 MeV neutrons. The increased error rate due to the decreased supply voltage has been identified as a potential hazard for operation of future low-voltage circuits. A novel methodology was proposed for accurate SER characterization of a manufacturing process and it was validated by measurements on a 0.6 μm process and 100 MeV neutrons. The methodology can be applied to the prediction of SER in the natural environment
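    Predicting the natural-environment SER from an accelerated neutron test is conventionally done by converting the measured error count into a per-bit cross-section and scaling by a reference atmospheric flux. The sketch below follows that convention; the function name, the FIT/Mbit reporting unit, and the sea-level reference flux of roughly 13 n/cm²·h (energy > 10 MeV) are assumptions on my part, not values from the thesis.

```python
def ser_fit_per_mbit(n_errors, fluence_cm2, n_bits, flux_cm2_hr=13.0):
    """SER in FIT/Mbit from an accelerated beam test.
    n_errors: upsets observed; fluence_cm2: beam fluence (n/cm^2);
    n_bits: bits under test; flux_cm2_hr: assumed field flux (n/cm^2/h)."""
    sigma_bit = n_errors / (fluence_cm2 * n_bits)   # cross-section per bit, cm^2
    # FIT = failures per 1e9 device-hours; scale to 1 Mbit of memory
    return sigma_bit * flux_cm2_hr * 1e9 * 1e6
```

The same arithmetic run with fluxes at flight altitude (hundreds of times the sea-level value) shows why avionics memories see far higher soft error rates.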

  13. Learning a locomotor task: with or without errors?

    Science.gov (United States)

    Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert

    2014-03-04

    Robotic haptic guidance is the most commonly used robotic training strategy to reduce performance errors while training. However, research on motor learning has emphasized that errors are a fundamental neural signal that drives motor adaptation. Thus, researchers have proposed robotic therapy algorithms that amplify movement errors rather than decrease them. However, to date, no study has analyzed with precision which training strategy is the most appropriate to learn an especially simple task. In this study, the impact of robotic training strategies that amplify or reduce errors on muscle activation and motor learning of a simple locomotor task was investigated in twenty-two healthy subjects. The experiment was conducted with the MAgnetic Resonance COmpatible Stepper (MARCOS), a special robotic device developed for investigations in the MR scanner. The robot moved the dominant leg passively and the subject was requested to actively synchronize the non-dominant leg to achieve an alternating stepping-like movement. Learning with four different training strategies that reduce or amplify errors was evaluated: (i) Haptic guidance: errors were eliminated by passively moving the limbs, (ii) No guidance: no robot disturbances were presented, (iii) Error amplification: existing errors were amplified with repulsive forces, (iv) Noise disturbance: errors were evoked intentionally with a randomly-varying force disturbance on top of the no guidance strategy. Additionally, the activation of four lower limb muscles was measured by means of surface electromyography (EMG). Strategies that reduce or do not amplify errors limit muscle activation during training and result in poor learning gains. Adding random disturbing forces during training seems to increase attention, and therefore improve motor learning. Error amplification seems to be the most suitable strategy for initially less skilled subjects, perhaps because subjects could better detect their errors and correct them.
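    The four strategies can be caricatured as simple force laws acting on the tracking error. Everything below, the function name, the gains, and the noise range, is an illustrative invention of mine, not a parameter set from the study; it only shows how the strategies differ in sign and error-dependence.

```python
import random

def training_force(error, strategy, rng=None):
    """Toy robot force for a given tracking error under each strategy."""
    if strategy == "haptic_guidance":
        return -1.0 * error            # corrective: cancels the error
    if strategy == "no_guidance":
        return 0.0                     # robot applies no disturbance
    if strategy == "error_amplification":
        return 0.5 * error             # repulsive: pushes the error larger
    if strategy == "noise":
        rng = rng or random.Random()
        return rng.uniform(-0.3, 0.3)  # random disturbance, error-independent
    raise ValueError(f"unknown strategy: {strategy}")
```

The sign structure makes the study's contrast explicit: guidance opposes the error signal the nervous system learns from, while amplification and noise preserve or enlarge it.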

  14. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    International Nuclear Information System (INIS)

    Gordon, J J; Siebers, J V

    2007-01-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate only for parameter values satisfying an approximate criterion of the form σ[1 - γN/25] ≲ 0.2; outside this range they could fail because they do not account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ∼> σ_P, where σ_P = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σ_P takes values other than 0.32 cm.) When σ << σ_P, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ∼> σ_P, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of treatments. The proposed alternative margin algorithm provides better margin
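    The VHMF under evaluation has the widely quoted closed form M = 2.5Σ + 1.64(√(σ² + σ_p²) − σ_p), often approximated as M ≈ 2.5Σ + 0.7σ. A direct transcription (units in cm; the function name is mine):

```python
import math

def van_herk_margin(Sigma, sigma, sigma_p=0.32):
    """CTV-to-PTV margin (cm): M = 2.5*Sigma + 1.64*(sqrt(sigma^2 + sigma_p^2) - sigma_p).
    Sigma, sigma: systematic/random setup SDs; sigma_p: penumbra SD
    (0.32 cm in the simplified dose model used here)."""
    return 2.5 * Sigma + 1.64 * (math.sqrt(sigma**2 + sigma_p**2) - sigma_p)
```

Note that N, the number of fractions, does not appear: the formula assumes infinitely many fractions, which is precisely the regime where this paper finds it can underestimate margins for small N and large σ.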

  15. Slotted rotatable target assembly and systematic error analysis for a search for long range spin dependent interactions from exotic vector boson exchange using neutron spin rotation

    Science.gov (United States)

    Haddock, C.; Crawford, B.; Fox, W.; Francis, I.; Holley, A.; Magers, S.; Sarsour, M.; Snow, W. M.; Vanderwerp, J.

    2018-03-01

    We discuss the design and construction of a novel target array of nonmagnetic test masses used in a neutron polarimetry measurement made in a search for possible new exotic spin-dependent neutron-atom interactions of nature at sub-mm length scales. This target was designed to accept and efficiently transmit a transversely polarized slow neutron beam through a series of long open parallel slots bounded by flat rectangular plates. These openings possessed equal atom density gradients normal to the slots from the flat test masses, with dimensions optimized to achieve maximum sensitivity to an exotic spin-dependent interaction from vector boson exchanges with ranges in the mm - μm regime. The parallel slots were oriented differently in four quadrants that can be rotated about the neutron beam axis in discrete 90° increments using a Geneva drive. The spin rotation signals from the 4 quadrants were measured using a segmented neutron ion chamber to suppress possible systematic errors from stray magnetic fields in the target region. We discuss the per-neutron sensitivity of the target to the exotic interaction, the design constraints, the potential sources of systematic errors which could be present in this design, and our estimate of the achievable sensitivity using this method.

  16. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    Energy Technology Data Exchange (ETDEWEB)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.; Bard, D.; Bertin, E.; Bosch, J.; Boutigny, D.; Courbin, F.; Dawson, W. A.; Donnarumma, A.; Fenech Conti, I.; Gavazzi, R.; Gentile, M.; Gill, M. S. S.; Hogg, D. W.; Huff, E. M.; Jee, M. J.; Kacprzak, T.; Kilbinger, M.; Kuntzer, T.; Lang, D.; Luo, W.; March, M. C.; Marshall, P. J.; Meyers, J. E.; Miller, L.; Miyatake, H.; Nakajima, R.; Ngole Mboula, F. M.; Nurbaeva, G.; Okura, Y.; Paulin-Henriksson, S.; Rhodes, J.; Schneider, M. D.; Shan, H.; Sheldon, E. S.; Simet, M.; Starck, J. -L.; Sureau, F.; Tewes, M.; Zarb Adami, K.; Zhang, J.; Zuntz, J.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  17. Eliciting the Functional Processes of Apologizing for Errors in Health Care: Developing an Explanatory Model of Apology.

    Science.gov (United States)

    Prothero, Marie M; Morse, Janice M

    2017-01-01

    The purpose of this article was to analyze the concept development of apology in the context of errors in health care: the administrative response, policy, and the format/process of the subsequent apology. Using pragmatic utility and a systematic review of the literature, 29 articles and one book provided attributes involved in apologizing. Analytic questions were developed to guide the data synthesis, and the types of apologies used in different circumstances were identified. The antecedents of apologizing, and its attributes and outcomes, were identified. A model was constructed illustrating the components of a complete apology, other types of apologies, and the ramifications/outcomes of each. Clinical implications of developing formal policies for correcting medical errors through apologies are recommended. Defining the essential elements of apology is the first step in establishing a just culture in health care. Respect for patient-centered care reduces the retaliatory consequences that follow an error and may even restore the physician-patient relationship.

  18. Phase Retrieval Using a Genetic Algorithm on the Systematic Image-Based Optical Alignment Testbed

    Science.gov (United States)

    Taylor, Jaime R.

    2003-01-01

    NASA's Marshall Space Flight Center's Systematic Image-Based Optical Alignment (SIBOA) Testbed was developed to test phase retrieval algorithms and hardware techniques. Individuals working with the facility developed the idea of implementing phase retrieval by separating the determination of the tip/tilt of each mirror from the piston motion (or translation) of each mirror. Presented in this report is an algorithm that determines the optimal phase correction associated only with the piston motion of the mirrors. A description of the phase retrieval problem is first presented, followed by a description of the SIBOA Testbed. A Discrete Fourier Transform (DFT) is necessary to transfer the incoming wavefront (or estimate of phase error) into the spatial frequency domain for comparison with the image. A method for reducing the DFT to seven scalar/matrix multiplications is presented. A genetic algorithm is then used to search for the phase error. The results of this new algorithm on a test problem are presented.

  19. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

    Off-resonance effects can introduce significant systematic errors in R2 measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, ¹⁵N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R2 caused by noise. Good estimates of total R2 uncertainty are critical in order to obtain accurate estimates of optimized chemical exchange parameters and their uncertainties derived from χ² minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in ¹⁵N R2 values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ² minimization protocol, in which the Carver-Richards equation is used to fit the observed R2 dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from ¹H R2 measurements in which systematic errors are negligible. Although ¹H and ¹⁵N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τ_ex, and the fractional population, p_a) were constrained to globally fit all R2 profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τ_ex, p_a as global parameters was not improved when these parameters were free to fit the R
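
    The global-fitting idea (shared exchange parameters across residues) can be sketched compactly. For brevity this uses the simpler Luz-Meiboom fast-exchange expression rather than the Carver-Richards equation, a single shared exchange rate instead of the (τ_ex, p_a) pair, and synthetic noise-free data; all values are illustrative:

```python
import math

# Luz-Meiboom fast-exchange expression (stand-in for Carver-Richards):
#   R2eff(nu) = R2_0 + (phi/kex) * (1 - tanh(x)/x),  x = kex/(4*nu),
# with phi = pa*pb*dw^2. For fixed kex, R2eff is linear in (R2_0, phi),
# so those are fit per residue in closed form; only the shared (global)
# exchange rate kex needs a 1-D scan.

def shape(nu, kex):
    x = kex / (4.0 * nu)
    return (1.0 - math.tanh(x) / x) / kex

def residue_sse(nus, r2s, kex):
    """Best-fit sum of squared residuals for one residue at fixed kex,
    solving the 2x2 normal equations for (R2_0, phi)."""
    g = [shape(nu, kex) for nu in nus]
    n = len(nus)
    sg, sgg = sum(g), sum(v * v for v in g)
    sy, sgy = sum(r2s), sum(v * y for v, y in zip(g, r2s))
    phi = (n * sgy - sg * sy) / (n * sgg - sg * sg)
    r2_0 = (sy - phi * sg) / n
    return sum((r2_0 + phi * v - y) ** 2 for v, y in zip(g, r2s))

nus = [50.0 * k for k in range(1, 21)]            # CPMG frequencies (Hz)
kex_true = 1500.0                                 # shared exchange rate (1/s)
residues = [(8.0, 1.0e5), (12.0, 4.0e4)]          # per-residue (R2_0, phi)
data = [[r2_0 + phi * shape(nu, kex_true) for nu in nus]
        for r2_0, phi in residues]

# Global fit: one kex minimizing the summed chi^2 over all residues.
grid = [500.0 + 100.0 * k for k in range(26)]     # 500 ... 3000 (1/s)
kex_fit = min(grid, key=lambda kex: sum(residue_sse(nus, r2s, kex)
                                        for r2s in data))
```

The constraint that all residues share one exchange rate is what shrinks the confidence limits in the abstract: each residue's profile contributes to the same 1-D χ² scan.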

  20. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  1. Structured Head and Neck CT Angiography Reporting Reduces Resident Revision Rates.

    Science.gov (United States)

    Johnson, Tucker F; Brinjikji, Waleed; Doolittle, Derrick A; Nagelschneider, Alex A; Welch, Brian T; Kotsenas, Amy L

    2018-04-12

    This resident-driven quality improvement project was undertaken to assess the effectiveness of structured reporting in reducing revision rates for after-hours reports dictated by residents. The first part of the study assessed baseline revision rates for head and neck CT angiography (CTA) examinations dictated by residents during after-hours call. A structured report was subsequently created based on templates on the RSNA informatics reporting website and on critical findings that should be assessed on all CTA examinations. The template was made available to residents through the speech recognition software for all head and neck CTA examinations for a duration of 2 months. Report revision rates were then compared with and without use of the structured template. The structured template was found to reduce revision rates by approximately 50%, with 10/41 unstructured reports revised versus 2/17 structured reports revised. We believe that structured reporting can help reduce reporting errors, particularly typographical errors, train residents to evaluate complex examinations in a systematic fashion, and assist them in recalling critical findings on these examinations. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to gain deeper insight into systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags based on the categories of the target language. In contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" than with errors. The article outlines the theoretical background of the project and shows some examples that should make the potential of SLA-oriented (non-error-based) tagging clearer.

  3. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. 
High-quality planning images, such as

  4. A new systematic calibration method of ring laser gyroscope inertial navigation system

    Science.gov (United States)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    Inertial navigation system has been the core component of both military and civil navigation systems. Before the INS is put into application, it is supposed to be calibrated in the laboratory in order to compensate repeatability error caused by manufacturing. Discrete calibration method cannot fulfill requirements of high-accurate calibration of the mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper has analyzed theories of error inspiration and separation in detail and presented a new systematic calibration method for ring laser gyroscope inertial navigation system. Error models and equations of calibrated Inertial Measurement Unit are given. Then proper rotation arrangement orders are depicted in order to establish the linear relationships between the change of velocity errors and calibrated parameter errors. Experiments have been set up to compare the systematic errors calculated by filtering calibration result with those obtained by discrete calibration result. The largest position error and velocity error of filtering calibration result are only 0.18 miles and 0.26m/s compared with 2 miles and 1.46m/s of discrete calibration result. These results have validated the new systematic calibration method and proved its importance for optimal design and accuracy improvement of calibration of mechanically dithered ring laser gyroscope inertial navigation system.
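
    The core estimation step, once rotation arrangements have established linear relationships between velocity-error changes and parameter errors, is an ordinary least-squares problem dv = J·θ. The Jacobian and parameter values below are made-up stand-ins, not the paper's actual error model:

```python
# Sketch: estimate sensor parameter errors theta from observed velocity-error
# changes dv via the normal equations (J^T J) theta = J^T dv.

def solve(a, b):
    """Solve the small square system a x = b by Gaussian elimination with
    partial pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

def least_squares(J, dv):
    n = len(J[0])
    jtj = [[sum(row[i] * row[k] for row in J) for k in range(n)]
           for i in range(n)]
    jtv = [sum(row[i] * v for row, v in zip(J, dv)) for i in range(n)]
    return solve(jtj, jtv)

theta_true = [1.0e-4, -2.0e-4, 5.0e-5]     # e.g. scale-factor / bias errors
J = [[9.8, 0.0, 1.0], [0.0, 9.8, 1.0],     # one row per rotation arrangement
     [9.8, 9.8, 0.0], [4.9, 0.0, 2.0], [0.0, 4.9, 2.0]]
dv = [sum(jc * tc for jc, tc in zip(row, theta_true)) for row in J]
theta_est = least_squares(J, dv)           # recovers theta_true
```

The design goal of the rotation arrangement orders is precisely to make J well-conditioned, so that all parameter errors are separable from the velocity-error observations.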

  5. MODELING OF MANUFACTURING ERRORS FOR PIN-GEAR ELEMENTS OF PLANETARY GEARBOX

    Directory of Open Access Journals (Sweden)

    Ivan M. Egorov

    2014-11-01

    Full Text Available The theoretical background for calculation of k-h-v type cycloid reducers was developed relatively long ago. Recently, however, cycloid reducer design has again attracted heightened attention, because such devices are used in many complex engineering systems, particularly mechatronic and robotic systems. Advanced technological capabilities for manufacturing such reducers now make it possible to realize their essential features: high efficiency, high gear ratio, kinematic accuracy and smooth motion. An adequate mathematical model makes it possible to adjust the kinematic accuracy of the reducer by rational selection of manufacturing tolerances for its parts, and thus to automate the design process for cycloid reducers with account of various factors, including technological ones. A mathematical model and technique have been developed for modeling the kinematic error of the reducer with account of multiple factors, including manufacturing errors. The errors are represented in a form convenient for predicting kinematic accuracy early at the manufacturing stage, according to the results of reducer part measurements on coordinate measuring machines. In the model, the wheel manufacturing errors are determined by the eccentricity and radius deviation of the pin tooth centers circle, and the deviation between the pin tooth axes positions and the centers circle. The satellite manufacturing errors are determined by the satellite eccentricity deviation and the satellite rim eccentricity. Due to collinearity, the pin tooth and pin tooth hole diameter errors and the satellite tooth profile errors for a designated contact point are integrated into one deviation. Software implementation of the model makes it possible to estimate the influence of these errors on satellite rotation angle error and

  6. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). The Kalman concept of "Control and Observation" is used: a versatile multi-function laser interferometer serves as the Observer, measuring the machine's error functions. A systematic error map of the machine's workspace is produced from these error function measurements, and the error map yields an error correction strategy. The article proposes a new method of forming the error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
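
    The map-then-compensate idea can be sketched in one dimension: positioning error is sampled on a grid (here from a made-up smooth error function standing in for interferometer data), and commanded positions are pre-compensated by the interpolated map value. Real implementations interpolate a 3-D (volumetric) map, but the structure is the same:

```python
import math

def true_error(x):
    """Stand-in for the real machine's positioning error (mm)."""
    return 0.02 * math.sin(0.5 * x)

grid = [float(i) for i in range(0, 101)]         # 1 mm grid spacing
error_map = [true_error(x) for x in grid]        # "measured" error map

def map_error(x):
    """Linearly interpolate the error map at position x."""
    i = min(int(x), len(grid) - 2)
    t = x - grid[i]
    return error_map[i] * (1.0 - t) + error_map[i + 1] * t

def reached(commanded):
    """Actual position reached by the (simulated) machine."""
    return commanded + true_error(commanded)

targets = [0.5 * k for k in range(1, 199)]       # stay inside the map
raw = [abs(reached(x) - x) for x in targets]
corrected = [abs(reached(x - map_error(x)) - x) for x in targets]
```

With a sufficiently dense map, the first-order pre-compensation `x - map_error(x)` reduces the residual error by more than an order of magnitude in this sketch.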

  7. Interventions to Reduce Distress in Adult Victims of Rape and Sexual Violence: A Systematic Review

    Science.gov (United States)

    Regehr, Cheryl; Alaggia, Ramona; Dennis, Jane; Pitts, Annabel; Saini, Michael

    2013-01-01

    Objectives: This article presents a systematic evaluation of the effectiveness of interventions aimed at reducing distress in adult victims of rape and sexual violence. Method: Studies were eligible for the review if the assignment of study participants to experimental or control groups was by random allocation or parallel cohort design. Results:…

  8. Instrumental systematics and weak gravitational lensing

    International Nuclear Information System (INIS)

    Mandelbaum, R.

    2015-01-01

    We present a pedagogical review of the weak gravitational lensing measurement process and its connection to major scientific questions such as dark matter and dark energy. Then we describe common ways of parametrizing systematic errors and understanding how they affect weak lensing measurements. Finally, we discuss several instrumental systematics and how they fit into this context, and conclude with some future perspective on how progress can be made in understanding the impact of instrumental systematics on weak lensing measurements

  9. Hand hygiene to reduce community transmission of influenza and acute respiratory tract infection: a systematic review.

    OpenAIRE

    Warren-Gash, C; Fragaszy, E; Hayward, AC

    2012-01-01

    : Please cite this paper as: Warren-Gash et al. (2012) Hand hygiene to reduce community transmission of influenza and acute respiratory tract infection: a systematic review. Influenza and Other Respiratory Viruses DOI: 10.1111/irv.12015. Hand hygiene may be associated with modest protection against some acute respiratory tract infections, but its specific role in influenza transmission in different settings is unclear. We aimed to review evidence that improving hand hygiene reduces primary an...

  10. Undesirable effects of covariance matrix techniques for error analysis

    International Nuclear Information System (INIS)

    Seibert, D.

    1994-01-01

    Regression with χ² constructed from covariance matrices should not be used for some combinations of covariance matrices and fitting functions. Using the technique for unsuitable combinations can amplify systematic errors. This amplification is uncontrolled, and can produce arbitrarily inaccurate results that might not be ruled out by a χ² test. In addition, this technique can give incorrect (artificially small) errors for fit parameters. I give a test for this instability and a more robust (but computationally more intensive) method for fitting correlated data.
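
    A classic instance of the problem (often called Peelle's Pertinent Puzzle; the numbers here are illustrative) shows how a covariance-matrix χ² fit can go wrong: two measurements of the same quantity share a 20% normalization error, and the χ²-minimizing constant falls below both measurements:

```python
# Fit a constant m to correlated data by minimizing
#   chi^2 = (d - m)^T V^-1 (d - m),
# which has the closed-form solution m = (1^T V^-1 d) / (1^T V^-1 1).

d = [1.0, 1.5]                     # two measurements of the same quantity
stat = [0.10, 0.15]                # independent (statistical) errors
norm = 0.20                        # fully correlated normalization error

# Covariance: V_ij = stat_i^2 * delta_ij + norm^2 * d_i * d_j
V = [[stat[0] ** 2 + norm ** 2 * d[0] * d[0], norm ** 2 * d[0] * d[1]],
     [norm ** 2 * d[1] * d[0], stat[1] ** 2 + norm ** 2 * d[1] * d[1]]]

# Invert the 2x2 matrix directly.
det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
Vi = [[V[1][1] / det, -V[0][1] / det],
      [-V[1][0] / det, V[0][0] / det]]

num = sum(Vi[i][j] * d[j] for i in range(2) for j in range(2))
den = sum(Vi[i][j] for i in range(2) for j in range(2))
m = num / den                      # ~0.88: below BOTH measurements
```

The "best" value lies outside the range of the data, exactly the uncontrolled bias the abstract warns about when the covariance structure and fitting function are incompatible.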

  11. On the errors on Omega(0): Monte Carlo simulations of the EMSS cluster sample

    DEFF Research Database (Denmark)

    Oukbir, J.; Arnaud, M.

    2001-01-01

    We perform Monte Carlo simulations of synthetic EMSS cluster samples to quantify the systematic errors and the statistical uncertainties on the estimate of Ω₀ derived from fits to the cluster number density evolution and to the X-ray temperature distribution up to z = 0.83. We identify the scatter around the relation between cluster X-ray luminosity and temperature as a source of systematic error, of the order of Δ_syst Ω₀ = 0.09, if not properly taken into account in the modelling. After correcting for this bias, our best Ω₀ is 0.66. The uncertainties on the shape...

  12. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    Science.gov (United States)

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T1 determination using TAPIR, a Look-Locker-based fast T1 mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T1 determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.
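
    Why an imperfect inversion pulse biases T1, and why mapping the inversion efficiency fixes it, can be seen from the standard three-parameter Look-Locker signal model S(t) = A - B·exp(-t/T1*). The numbers below are synthetic and the model deliberately simplified:

```python
# Simplified Look-Locker model: under continuous readout, magnetization
# relaxes with apparent time constant T1* toward a reduced steady state
# A = M0 * T1*/T1. With inversion efficiency beta (perfect inversion:
# beta = 1, i.e. Mz(0) = -M0), the fitted amplitude is
#   B = M0 * (beta + T1*/T1),
# so the usual estimate T1 = T1*(B/A - 1) actually returns beta*T1 and
# underestimates T1 unless beta is measured and divided out.

m0 = 1.0
t1_true = 1000.0      # ms
t1_star = 500.0       # ms, apparent relaxation time under readout
beta = 0.9            # imperfect inversion efficiency

A = m0 * t1_star / t1_true
B = m0 * (beta + t1_star / t1_true)

t1_naive = t1_star * (B / A - 1.0)    # ignores inversion imperfection: 900 ms
t1_corrected = t1_naive / beta        # uses the mapped efficiency: 1000 ms
```

This is the motivation for TAPIR's extra inversion-efficiency mapping sequence: once beta is known per voxel, the systematic bias divides out exactly (in this idealized model).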

  13. Understanding errors in EIA projections of energy demand

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Carolyn; Herrnstadt, Evan; Morgenstern, Richard [Resources for the Future, 1616 P St. NW, Washington, DC 20036 (United States)

    2009-08-15

    This paper investigates the potential for systematic errors in the Energy Information Administration's (EIA) widely used Annual Energy Outlook, focusing on the near- to mid-term projections of energy demand. Based on analysis of the EIA's 22-year projection record, we find a fairly modest but persistent tendency to underestimate total energy demand by an average of 2 percent per year after controlling for projection errors in gross domestic product, oil prices, and heating/cooling degree days. For 14 individual fuels/consuming sectors routinely reported by the EIA, we observe a great deal of directional consistency in the errors over time, ranging up to 7 percent per year. Electric utility renewables, electric utility natural gas, transportation distillate, and residential electricity show significant biases on average. Projections for certain other sectors have significant unexplained errors for selected time horizons. Such independent evaluation can be useful for validating analytic efforts and for prioritizing future model revisions. (author)

  14. Improved Landau gauge fixing and discretisation errors

    International Nuclear Information System (INIS)

    Bonnet, F.D.R.; Bowman, P.O.; Leinweber, D.B.; Richards, D.G.; Williams, A.G.

    2000-01-01

    Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition displays the secondary benefit of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition

  15. Systematic Review of Mindfulness Practice for Reducing Job Burnout.

    Science.gov (United States)

    Luken, Michelle; Sammons, Amanda

    2016-01-01

    A systematic search and critical appraisal of interdisciplinary literature was conducted to evaluate the evidence for practicing mindfulness to treat job burnout and to explore implications for occupational therapy practitioners. Eight articles met inclusion criteria. Each study was assessed for quality using the Physiotherapy Evidence Database scale. We used the U.S. Agency for Health Care Policy and Research guidelines to determine strength of evidence. Of the studies reviewed, participants included health care professionals and teachers; no studies included occupational therapy practitioners. Six of the 8 studies demonstrated statistically significant decreases in job burnout after mindfulness training. Seven of the studies were of fair to good quality. There is strong evidence for the use of mindfulness practice to reduce job burnout among health care professionals and teachers. Research is needed to fill the gap on whether mindfulness is effective for treating burnout in occupational therapy practitioners. Copyright © 2016 by the American Occupational Therapy Association, Inc.

  16. Systematic errors in the determination of the spectroscopic g-factor in broadband ferromagnetic resonance spectroscopy: A proposed solution

    Science.gov (United States)

    Gonzalez-Fuentes, C.; Dumas, R. K.; García, C.

    2018-01-01

    A theoretical and experimental study of the influence of small offsets of the magnetic field (δH) on the measurement accuracy of the spectroscopic g-factor (g) and saturation magnetization (Ms) obtained by broadband ferromagnetic resonance (FMR) measurements is presented. The random nature of δH generates systematic, opposite-sign deviations of the values of g and Ms with respect to their true values. A δH on the order of a few Oe leads to a ~10% error in g and Ms for a typical range of frequencies employed in broadband FMR experiments. We propose a simple experimental methodology to significantly minimize the effect of δH on the fitted values of g and Ms, eliminating their apparent dependence on the range of frequencies employed. Our method was successfully tested using broadband FMR measurements on a 5 nm thick Ni80Fe20 film for frequencies ranging between 3 and 17 GHz.
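
    The mechanism can be illustrated with the in-plane Kittel relation f = γ'·sqrt(H(H + 4πMs)). This is a sketch, not the paper's method: the Permalloy-like parameter values are assumptions, and the "fit" uses exact algebra on three points rather than least squares over a frequency sweep:

```python
import math

# Squaring the in-plane Kittel relation, with gamma' = g*muB/h,
#   f^2 = a*H_true^2 + b0*H_true,  a = gamma'^2,  b0 = a * 4piMs.
# A field offset dH (H_true = H_set + dH) turns f^2 into a full quadratic in
# the set field; a fit that omits the constant term biases g (and Ms, with
# opposite sign). Including dH as a parameter removes the bias.

MUB_H = 1.3996e-3                          # GHz/Oe per unit g-factor
g_true, four_pi_ms, dh = 2.1, 1.0e4, 5.0   # illustrative film parameters
a_true = (MUB_H * g_true) ** 2

def f_squared(h_set):
    """Simulated resonance frequency squared (GHz^2) at nominal field h_set."""
    h = h_set + dh
    return a_true * h * (h + four_pi_ms)

h1, h2, h3 = 500.0, 1000.0, 1500.0         # equally spaced set fields (Oe)
y1, y2, y3 = f_squared(h1), f_squared(h2), f_squared(h3)

# Naive two-parameter fit f^2 = a*H^2 + b*H (offset ignored): g comes out low.
a_naive = (y3 / h3 - y1 / h1) / (h3 - h1)
g_naive = math.sqrt(a_naive) / MUB_H

# Three-parameter fit f^2 = a*H^2 + b*H + c through the three points; for
# equally spaced fields the second difference gives `a` exactly.
step = h2 - h1
a = (y3 - 2.0 * y2 + y1) / (2.0 * step * step)
b = (y3 - y1) / (2.0 * step) - 2.0 * a * h2
c = y2 - a * h2 * h2 - b * h2
dh_fit = (b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # smaller root = dH
g_fit = math.sqrt(a) / MUB_H
ms_fit = (b - 2.0 * a * dh_fit) / a                        # recovers 4piMs
```

With dH included, g, 4πMs and the offset itself are recovered; the offset-free fit returns a g a few percent low for a 5 Oe offset, comparable to the errors discussed in the abstract.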

  17. Reducing Monte Carlo error in the Bayesian estimation of risk ratios using log-binomial regression models.

    Science.gov (United States)

    Salmerón, Diego; Cano, Juan A; Chirlaque, María D

    2015-08-30

    In cohort studies, binary outcomes are very often analyzed by logistic regression. However, it is well known that when the goal is to estimate a risk ratio, the logistic regression is inappropriate if the outcome is common. In these cases, a log-binomial regression model is preferable. On the other hand, the estimation of the regression coefficients of the log-binomial model is difficult owing to the constraints that must be imposed on these coefficients. Bayesian methods allow a straightforward approach for log-binomial regression models and produce smaller mean squared errors in the estimation of risk ratios than the frequentist methods, and the posterior inferences can be obtained using the software WinBUGS. However, Markov chain Monte Carlo methods implemented in WinBUGS can lead to large Monte Carlo errors in the approximations to the posterior inferences because they produce correlated simulations, and the accuracy of the approximations are inversely related to this correlation. To reduce correlation and to improve accuracy, we propose a reparameterization based on a Poisson model and a sampling algorithm coded in R. Copyright © 2015 John Wiley & Sons, Ltd.
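
    The abstract's starting point, that the odds ratio from logistic regression is a poor stand-in for the risk ratio when the outcome is common, is easy to verify numerically. The group risks below are illustrative:

```python
# Risk ratio vs odds ratio for an exposed group (risk p1) and an
# unexposed group (risk p0).

def risk_ratio(p1, p0):
    return p1 / p0

def odds_ratio(p1, p0):
    return (p1 / (1.0 - p1)) / (p0 / (1.0 - p0))

# Common outcome: the OR badly overstates the RR.
rr_common = risk_ratio(0.60, 0.40)      # 1.5
or_common = odds_ratio(0.60, 0.40)      # 2.25

# Rare outcome: OR ~ RR, so logistic regression is harmless.
rr_rare = risk_ratio(0.006, 0.004)      # 1.5
or_rare = odds_ratio(0.006, 0.004)      # ~1.503
```

This divergence for common outcomes is what motivates the log-binomial model, and in turn the Poisson-based reparameterization the abstract proposes to keep its Bayesian fitting both constrained and well-mixed.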

  18. Physician Preferences to Communicate Neuropsychological Results: Comparison of Qualitative Descriptors and a Proposal to Reduce Communication Errors.

    Science.gov (United States)

    Schoenberg, Mike R; Osborn, Katie E; Mahone, E Mark; Feigon, Maia; Roth, Robert M; Pliskin, Neil H

    2017-11-08

    Errors in communication are a leading cause of medical errors. A potential source of error in communicating neuropsychological results is confusion in the qualitative descriptors used to describe standardized neuropsychological data. This study sought to evaluate the extent to which medical consumers of neuropsychological assessments believed that results/findings were not clearly communicated. In addition, preference data for a variety of qualitative descriptors commonly used to communicate normative neuropsychological test scores were obtained. Preference data were obtained for five qualitative descriptor systems as part of a larger 36-item internet-based survey of physician satisfaction with neuropsychological services. A new qualitative descriptor system, termed the Simplified Qualitative Classification System (Q-Simple), was proposed to reduce the potential for communication errors using seven terms: very superior, superior, high average, average, low average, borderline, and abnormal/impaired. A non-random convenience sample of 605 clinicians identified from four United States academic medical centers from January 1, 2015 through January 7, 2016 were invited to participate. A total of 182 surveys were completed. A minority of clinicians (12.5%) indicated that neuropsychological study results were not clearly communicated. When communicating neuropsychological standardized scores, the two most preferred qualitative descriptor systems were that of Heaton and colleagues (Comprehensive norms for an extended Halstead-Reitan battery: Demographic corrections, research findings, and clinical applications. Odessa, TX: Psychological Assessment Resources) (26%) and the newly proposed Q-Simple system (22%). Initial findings highlight the need to improve and standardize communication of neuropsychological results. These data offer initial guidance for preferred terms to communicate test results and form a foundation for more

  19. Irregular analytical errors in diagnostic testing - a novel concept.

    Science.gov (United States)

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    In laboratory medicine, routine periodic analyses of internal and external quality control samples, interpreted by statistical methods, are mandatory for batch clearance. Analysis of these process-oriented measurements gives insight into random analytical variation and systematic calibration bias over time. In such a setting, however, no individual sample is under individual quality control: the quality control measurements act only at the batch level. A wide range of effects and interferences associated with an individual diagnostic sample can compromise any analyte, and a quality-control-sample-based approach to quality assurance is clearly not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest introducing a new term: the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (a deviation from the result of a reference measurement procedure) of a test result so large that it cannot be explained by the measurement uncertainty of the routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias relative to the reference measurement system. Such errors should be termed irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or by a processing error in the analytical process associated with that single sample.
Currently, the availability of reference measurement procedures is still highly limited, but LC
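The definition above amounts to a threshold test: a result is flagged as irregular when its deviation from the reference value exceeds what the routine assay's measurement uncertainty plus method bias can explain. A minimal sketch; the combination rule and the coverage factor `k` are assumptions for illustration, not taken from the paper:

```python
def is_irregular(result: float, reference: float,
                 u_routine: float, bias: float, k: float = 2.0) -> bool:
    """Flag an irregular (individual) analytical error.

    Hypothetical criterion: the absolute deviation of the routine result
    from the reference measurement procedure result exceeds the expanded
    routine measurement uncertainty (coverage factor k) plus the known
    method bias. Both the linear combination and k=2 are assumptions.
    """
    return abs(result - reference) > k * u_routine + abs(bias)

# A deviation of 2.0 against an allowance of 2*0.5 + 0.2 = 1.2 is flagged.
print(is_irregular(10.0, 8.0, u_routine=0.5, bias=0.2))  # prints True
print(is_irregular(8.5, 8.0, u_routine=0.5, bias=0.2))   # prints False
```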

  20. Dosimetric implications of inter- and intrafractional prostate positioning errors during tomotherapy : Comparison of gold marker-based registrations with native MVCT.

    Science.gov (United States)

    Wust, Peter; Joswig, Marc; Graf, Reinhold; Böhmer, Dirk; Beck, Marcus; Barelkowski, Thomasz; Budach, Volker; Ghadjar, Pirus

    2017-09-01

    For high-dose radiation therapy (RT) of prostate cancer, image-guided (IGRT) and intensity-modulated RT (IMRT) approaches are standard. Less is known regarding comparisons of different IGRT techniques and the resulting residual errors, as well as regarding their influences on dose distributions. A total of 58 patients who received tomotherapy-based RT up to 84 Gy for high-risk prostate cancer underwent IGRT based either on daily megavoltage CT (MVCT) alone (n = 43) or on MVCT with the additional use of gold markers (n = 15) under routine conditions. Planned Adaptive (Accuray Inc., Madison, WI, USA) software was used for detailed offline analysis to quantify residual interfractional prostate positioning errors, along with systematic and random errors and the resulting safety margins, for both IGRT approaches. Dosimetric parameters for clinical target volume (CTV) coverage and exposure of organs at risk (OAR) were also analyzed and compared. Interfractional as well as intrafractional displacements were determined. Particularly in the vertical direction, residual interfractional positioning errors were reduced using the gold marker-based approach, but dosimetric differences were moderate and the clinical relevance relatively small. Intrafractional prostate motion proved to be quite high, with displacements of 1-3 mm; however, these did not result in additional dosimetric impairments. Residual interfractional positioning errors were reduced using gold marker-based IGRT; however, this resulted in only slightly different final dose distributions. Therefore, daily MVCT-based IGRT without markers might be a valid alternative.
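Safety margins derived from systematic (Σ) and random (σ) positioning errors are commonly computed with the van Herk recipe; the abstract does not state which recipe was used, so this is purely illustrative:

```python
def van_herk_margin(sigma_systematic_mm: float, sigma_random_mm: float) -> float:
    """Widely used CTV-to-PTV margin recipe (van Herk): M = 2.5*Sigma + 0.7*sigma.

    Sigma is the standard deviation of systematic positioning errors across
    patients, sigma the standard deviation of random day-to-day errors.
    Illustrative only; the study's actual margin recipe is not given here.
    """
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

# Example: Sigma = 2 mm systematic, sigma = 3 mm random.
print(round(van_herk_margin(2.0, 3.0), 1))  # prints 7.1
```

Shrinking Σ through better IGRT (e.g. gold markers) reduces the margin much faster than shrinking σ, which is why residual systematic errors dominate margin design.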

  1. Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.

    Science.gov (United States)

    Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D

    2017-06-01

    The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
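In a failure mode and effects analysis, each process step's failure modes are typically scored for severity, occurrence, and detectability, then ranked by their product, the risk priority number (RPN). A generic sketch using the conventional 1-10 scales; the failure modes and scores below are hypothetical, not the study's scoring sheet:

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    # Risk Priority Number used in FMEA: each factor is conventionally
    # scored 1-10, and failure modes are ranked by the product.
    return severity * occurrence * detection

# Hypothetical medication-handling failure modes, for illustration only.
modes = {
    "syringe mislabeled": rpn(9, 4, 6),
    "wrong concentration drawn": rpn(8, 3, 5),
    "infusion not double-checked": rpn(7, 6, 7),
}
ranked = sorted(modes.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0])  # prints ('infusion not double-checked', 294)
```

Ranking by RPN is what lets a team pick a small bundle of countermeasures (like the five in this study) aimed at the highest-risk steps.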

  2. Orbit error characteristic and distribution of TLE using CHAMP orbit data

    Science.gov (United States)

    Xu, Xiao-li; Xiong, Yong-qing

    2018-02-01

    Space object orbital covariance data are required for collision risk assessments, but publicly accessible two-line element (TLE) data do not provide orbital error information. This paper compared historical TLE data with GPS precision ephemerides of CHAMP to assess TLE orbit accuracy from 2002 to 2008, inclusive. TLE error variations with longitude and latitude were calculated to analyze error characteristics and spatial distribution. The results indicate that TLE orbit data are systematically biased owing to the limitations of the SGP4 model. The biases can reach the level of kilometers, and their sign and magnitude correlate significantly with longitude.
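At each common epoch, characterizing TLE error against a precise ephemeris reduces to differencing positions and splitting the result into a systematic (mean) and a random (scatter) component. A toy sketch with synthetic vectors; real work would propagate the TLEs with SGP4 and rotate the differences into the radial/along-track/cross-track frame:

```python
import numpy as np

# Hypothetical position samples (km) at two common epochs. In practice
# these come from SGP4-propagated TLEs and a precise GPS ephemeris.
tle_pos = np.array([[7000.1,  0.3, -0.2],
                    [6999.8, 10.4, -0.1]])
gps_pos = np.array([[7000.0,  0.0,  0.0],
                    [7000.0, 10.0,  0.0]])

diff = tle_pos - gps_pos
bias = diff.mean(axis=0)    # systematic component per axis
scatter = diff.std(axis=0)  # random component per axis
print(bias)                 # per-axis systematic bias (km)
```

Binning such differences by sub-satellite longitude and latitude is what yields the spatial error maps described in the abstract.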

  3. Which non-technical skills do junior doctors require to prescribe safely? A systematic review.

    Science.gov (United States)

    Dearden, Effie; Mellanby, Edward; Cameron, Helen; Harden, Jeni

    2015-12-01

    Prescribing errors are a major source of avoidable morbidity and mortality. Junior doctors write most in-hospital prescriptions and are the least experienced members of the healthcare team. This puts them at high risk of error and makes them attractive targets for interventions to improve prescription safety. Error analysis has shown a background of complex environments with multiple contributory conditions. Similar conditions in other high risk industries, such as aviation, have led to an increased understanding of so-called human factors and the use of non-technical skills (NTS) training to try to reduce error. To date no research has examined the NTS required for safe prescribing. The aim of this review was to develop a prototype NTS taxonomy for safe prescribing, by junior doctors, in hospital settings. A systematic search identified 14 studies analyzing prescribing behaviours and errors by junior doctors. Framework analysis was used to extract data from the studies and identify behaviours related to categories of NTS that might be relevant to safe and effective prescribing performance by junior doctors. Categories were derived from existing literature and inductively from the data. A prototype taxonomy of relevant categories (situational awareness, decision making, communication and team working, and task management) and elements was constructed. This prototype will form the basis of future work to create a tool that can be used for training and assessment of medical students and junior doctors to reduce prescribing error in the future. © 2015 The British Pharmacological Society.

  4. Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors

    Directory of Open Access Journals (Sweden)

    G. Yang

    2003-06-01

    Full Text Available Examples of current research into systematic errors in climate models are used to demonstrate the importance of scale interactions on diurnal, intraseasonal and seasonal timescales for the mean and variability of the tropical climate system. This work has enabled some conclusions to be drawn about possible processes that may need to be represented, and some recommendations to be made regarding model improvements. It has been shown that the Maritime Continent heat source is a major driver of the global circulation, yet it is poorly represented in GCMs. A new climatology of the diurnal cycle has been used to provide compelling evidence of important land-sea breeze and gravity wave effects, which may play a crucial role in the heat and moisture budget of this key region for the tropical and global circulation. The role of the diurnal cycle has also been emphasized for intraseasonal variability associated with the Madden-Julian Oscillation (MJO). It is suggested that the diurnal cycle in sea surface temperature (SST) during the suppressed phase of the MJO leads to a triggering of cumulus congestus clouds, which serve to moisten the free troposphere and hence precondition the atmosphere for the next active phase. It has further been shown that coupling between the ocean and atmosphere on intraseasonal timescales leads to a more realistic simulation of the MJO. These results stress the need for models to be able to simulate, firstly, the observed tri-modal distribution of convection and, secondly, the coupling between the ocean and atmosphere on diurnal to intraseasonal timescales. It is argued, however, that the current representation of the ocean mixed layer in coupled models is not adequate to represent the complex structure of the observed mixed layer, in particular the formation of salinity barrier layers, which can potentially provide much stronger local coupling between the atmosphere and ocean on diurnal to intraseasonal timescales.

  5. Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Auflick, Jack L.

    1999-04-21

    Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each HRA method has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology or improper application of techniques can produce invalid HEP estimates, and such erroneous estimation of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.

  6. Effect of Professional Ethics on Reducing Medical Errors from the Viewpoint of Faculty Members in Medical School of Tabriz University of Medical Sciences

    Directory of Open Access Journals (Sweden)

    Fatemeh Donboli Miandoab

    2017-12-01

    Full Text Available Background: Professionalism and adherence to ethics and professional standards are among the most important topics in medical ethics and can play a role in reducing medical errors. This paper examines and evaluates the effect of professional ethics on reducing medical errors from the viewpoint of faculty members in the medical school of the Tabriz University of Medical Sciences. Methods: In this cross-sectional descriptive study, faculty members of the Tabriz University of Medical Sciences were the statistical population, from whom 105 participants were selected through simple random sampling. A questionnaire was used to examine and compare the self-assessed opinions of faculty members in the internal, surgical, pediatric, gynecological, and psychiatric departments. The questionnaires were completed by a self-assessment method and the collected data were analyzed using SPSS 21. Results: Based on physicians' opinions, professional ethical considerations and their three domains have a significant role in reducing medical errors and crimes. The mean scores (standard deviations) of the managerial skills, knowledge and communication skills, and environmental variables were 46.7 (5.64), 64.6 (8.14), and 16.2 (2.97), respectively, from the physicians' viewpoints. The significant factors with the highest scores on the reduction of medical errors and crimes in all three domains were as follows: in the managerial skills domain, trust, the physician's sense of responsibility toward the patient, and his/her respect for patients' rights; in the knowledge and communication skills domain, general competence and eligibility as a physician and examination and diagnosis skills; and, last, in the environmental domain, the sufficiency of training in ethical issues during education and satisfaction of basic needs. Conclusion: Based on the findings of this research, attention to the improvement of communication, management and environment skills should

  7. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    Science.gov (United States)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  8. Predicting Statistical Response and Extreme Events in Uncertainty Quantification through Reduced-Order Models

    Science.gov (United States)

    Qi, D.; Majda, A.

    2017-12-01

    A low-dimensional reduced-order statistical closure model is developed for quantifying the uncertainty in statistical sensitivity and intermittency in principal model directions with largest variability in high-dimensional turbulent systems and turbulent transport models. Imperfect model sensitivity is improved through a recent mathematical strategy for calibrating model errors in a training phase, where information theory and linear statistical response theory are combined in a systematic fashion to achieve optimal model performance. The idea behind the reduced-order method comes from a self-consistent mathematical framework for general systems with quadratic nonlinearity, where crucial high-order statistics are approximated by a systematic model calibration procedure. Model efficiency is improved through additional damping and noise corrections that replace the expensive energy-conserving nonlinear interactions. Model errors due to the imperfect nonlinear approximation are corrected by tuning the model parameters using linear response theory with an information metric in a training phase before prediction. A statistical energy principle is adopted to introduce a global scaling factor that characterizes the higher-order moments in a consistent way, improving model sensitivity. Stringent models of barotropic and baroclinic turbulence are used to demonstrate the feasibility of the reduced-order methods. Principal statistical responses in mean and variance can be captured by the reduced-order models with accuracy and efficiency. In addition, the reduced-order models are also used to capture the crucial passive tracer field advected by the baroclinic turbulent flow. It is demonstrated that crucial principal statistical quantities, like the tracer spectrum and fat tails in the tracer probability density functions at the most important large scales, can be captured efficiently and accurately using the reduced-order tracer model in various dynamical regimes of the flow field with

  9. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. Results differed according to the data-collection methods, but the proportion of total accidents attributable to human error fell in the wide range of 20∼85%, averaging 35%. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate attributable to human error is 0∼0.5 cases/reactor-year and has not varied much. The proportion of the total attributable to human error has therefore tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, problems of organization and management have been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets to reduce human error are reported. (K.I.)

  10. Human error in strabismus surgery: Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    S. Schutte (Sander); J.R. Polling (Jan Roelof); F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)

    2009-01-01

    Background: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence

  11. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  12. Optics measurement algorithms and error analysis for the proton energy frontier

    Directory of Open Access Journals (Sweden)

    A. Langner

    2015-03-01

    Full Text Available Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve accuracy and precision. Measurements from the 2012 run have been reanalyzed and, owing to the improved algorithms, yield a significantly higher precision of the derived optical parameters, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding emittance evolution during the energy ramp.

  13. Optics measurement algorithms and error analysis for the proton energy frontier

    Science.gov (United States)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve accuracy and precision. Measurements from the 2012 run have been reanalyzed and, owing to the improved algorithms, yield a significantly higher precision of the derived optical parameters, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding emittance evolution during the energy ramp.
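The precision gain from combining more beam position monitor measurements can be illustrated with an inverse-variance weighted mean, whose combined uncertainty shrinks roughly as 1/√N. The values below are synthetic, not LHC data:

```python
import numpy as np

# Four hypothetical independent measurements of the same optical parameter,
# each with its own uncertainty. Pooling them with inverse-variance weights
# yields a combined estimate whose error is smaller than any single input.
values = np.array([1.02, 0.98, 1.01, 0.99])
sigmas = np.array([0.04, 0.05, 0.04, 0.05])

w = 1.0 / sigmas**2
mean = np.sum(w * values) / np.sum(w)  # weighted best estimate
err = np.sqrt(1.0 / np.sum(w))         # combined uncertainty
print(mean, err)
```

This simple averaging effect, combined with the treatment of systematic errors, is the mechanism by which the improved algorithm shrinks error bars.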

  14. The effect of experimental sleep fragmentation on error monitoring.

    Science.gov (United States)

    Ko, Cheng-Hung; Fang, Ya-Wen; Tsai, Ling-Ling; Hsieh, Shulan

    2015-01-01

    Experimental sleep fragmentation (SF) is characterized by frequent brief arousals without reduced total sleep time and causes daytime sleepiness and impaired neurocognitive processes. This study explored the impact of SF on error monitoring. Thirteen adults underwent auditory stimuli-induced high-level (H) and low-level (L) SF nights. Flanker task performance and electroencephalogram data were collected in the morning following SF nights. Compared to LSF, HSF induced more arousals and stage N1 sleep, decreased slow wave sleep and rapid-eye-movement sleep (REMS), decreased subjective sleep quality, increased daytime sleepiness, and decreased amplitudes of P300 and error-related positivity (Pe). SF effects on N1 sleep were negatively correlated with SF effects on the Pe amplitude. Furthermore, as REMS was reduced by SF, post-error accuracy compensations were greatly reduced. In conclusion, attentional processes and error monitoring were impaired following one night of frequent sleep disruptions, even when total sleep time was not reduced. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison results, and the distributions of random and systematic errors in precipitation measurements are studied in this paper. A correction method, especially for wind-induced errors, is developed. The results show that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between observations from the operational gauge and the pit gauge; the correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out only by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
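A power-function correlation y = a·x^b, like the one reported between the horizontal-gauge catch and the operational-vs-pit gauge difference, is conveniently fitted by ordinary least squares in log-log space. A sketch on synthetic data; the paper's actual coefficients are not reproduced here:

```python
import numpy as np

# Synthetic data following an exact power law y = 0.5 * x**1.3, standing in
# for horizontal-gauge catch (x) vs. absolute gauge difference (y).
x = np.array([1.0, 2.0, 4.0, 8.0])
y = 0.5 * x**1.3

# Fitting log(y) = b*log(x) + log(a) recovers the exponent and coefficient.
b, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)
print(a, b)
```

With such a fit in hand, a single parallel horizontal-gauge observation suffices to estimate the wind-induced correction for an operational gauge.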

  16. A systematic review: effectiveness of mass media campaigns for reducing alcohol-impaired driving and alcohol-related crashes.

    Science.gov (United States)

    Yadav, Rajendra-Prasad; Kobayashi, Miwako

    2015-09-04

    Mass media campaigns have long been used as a tool for promoting public health. In the past decade, the growth of social media has allowed more diverse options for mass media campaigns. This systematic review was conducted to assess newer evidence from quantitative studies on the effectiveness of mass media campaigns for reducing alcohol-impaired driving (AID) and alcohol-related crashes, particularly after the paper that Elder et al. published in 2004. This review focused on English language studies that evaluated the effect of mass media campaigns for reducing AID and alcohol-related crashes, with or without enforcement efforts. A systematic search was conducted for studies published between January 1, 2002 and December 31, 2013. Studies from the review by Elder et al. were added as well. A total of 19 studies met the inclusion criteria for the systematic review, including three studies from the review by Elder et al. Nine of them had concomitant enforcement measures and did not evaluate the impact of media campaigns independently. Studies that evaluated the impact of mass media independently showed reduction more consistently (median -15.1%, range -28.8 to 0%), whereas results of studies that had concomitant enforcement activities were more variable (median -8.6%, range -36.4 to +14.6%). Summary effects calculated from seven studies showed no evidence of media campaigns reducing the risk of alcohol-related injuries or fatalities (RR 1.00, 95% CI = 0.94 to 1.06). Despite an additional decade of evidence, the reviewed studies were heterogeneous in their approaches; therefore, we could not conclude that media campaigns reduced the risk of alcohol-related injuries or crashes. More studies are needed, including studies evaluating newly emerging media and the cost-effectiveness of media campaigns.

  17. A Systematic Review of Behavioral Interventions to Reduce Condomless Sex and Increase HIV Testing for Latino MSM.

    Science.gov (United States)

    Pérez, Ashley; Santamaria, E Karina; Operario, Don

    2017-12-15

    Latino men who have sex with men (MSM) in the United States are disproportionately affected by HIV, and there have been calls to improve availability of culturally sensitive HIV prevention programs for this population. This article provides a systematic review of intervention programs to reduce condomless sex and/or increase HIV testing among Latino MSM. We searched four electronic databases using a systematic review protocol, screened 1777 unique records, and identified ten interventions analyzing data from 2871 Latino MSM. Four studies reported reductions in condomless anal intercourse, and one reported reductions in number of sexual partners. All studies incorporated surface structure cultural features such as bilingual study recruitment, but the incorporation of deep structure cultural features, such as machismo and sexual silence, was lacking. There is a need for rigorously designed interventions that incorporate deep structure cultural features in order to reduce HIV among Latino MSM.

  18. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis and rotary table, A′-axis, on the workpiece side) was set up using rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. Each of these 43 error components can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is governed by the position of the tool center point (TCP) relative to the workpiece: when the cutting tool deviates from its ideal position relative to the workpiece, machining error results. The compensation process comprises detecting the present tool path, analyzing the geometric errors of the RTTTR five-axis machine tool, translating current component positions to compensated positions using the kinematic error model, converting the newly created positions into new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
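The rigid-body/homogeneous-transformation-matrix approach composes one 4×4 transform per axis and perturbs the ideal chain with small error terms; the paper's model carries 43 such components. A two-error miniature with hypothetical offset and angular values, for illustration only:

```python
import numpy as np

# Minimal HTM sketch: an ideal translation stage versus the same stage with
# a small linear positioning error (5 um over 100 mm) and a small angular
# error (0.1 mrad about z). The TCP error is read off the translation part.
def translation(dx: float, dy: float, dz: float) -> np.ndarray:
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def small_rot_z(eps: float) -> np.ndarray:
    # First-order (small-angle) rotation about z, as used in linearized
    # geometric error models.
    T = np.eye(4)
    T[0, 1], T[1, 0] = -eps, eps
    return T

ideal = translation(100.0, 0.0, 0.0)                        # mm
actual = small_rot_z(1e-4) @ translation(100.005, 0.0, 0.0)  # perturbed chain
tcp_error = (actual - ideal)[:3, 3]  # volumetric error at the TCP (mm)
print(tcp_error)
```

A full model chains such matrices for every axis and error component, and compensation inverts the chain to shift commanded positions.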

  19. Controlling qubit drift by recycling error correction syndromes

    Science.gov (United States)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated; if it is ignored, error rates will rise to intolerable levels. Compensation, however, requires knowing the parameters' current values, which appears to require halting experimental work to recalibrate (e.g., via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error-correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple, yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise it is less effective, only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that cannot be compensated. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
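The feedback idea can be caricatured in a few lines. This deterministic toy is not the paper's algorithm: it replaces stochastic syndrome counts with their expected rate, assumes the drift direction is tracked correctly, and uses a hypothetical drift rate and gain. It shows syndrome-driven recalibration holding the residual mis-calibration bounded while the underlying parameter drifts far from its initial calibration:

```python
import math

drift_per_step = 5e-4   # hypothetical steady parameter drift per QEC cycle
gain = 0.1              # recalibration gain per unit of syndrome rate

theta = 0.0             # true mis-calibration angle of the physical qubit
theta_hat = 0.0         # estimate maintained from syndrome statistics only

for _ in range(5000):
    theta += drift_per_step
    residual = theta - theta_hat
    # Expected syndrome firing rate for a small residual over-rotation:
    syndrome_rate = math.sin(residual / 2.0) ** 2
    # On-the-fly recalibration: nudge the estimate toward the drifting value.
    theta_hat += gain * syndrome_rate

residual = theta - theta_hat
print(theta, residual)  # parameter drifted far, but residual stays small
```

Without the feedback the error rate would grow with sin²(theta/2); with it, the residual settles where the correction rate balances the drift.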

  20. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings that cause diagnostic errors. Cognitive factors have been found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that clinical reasoning can take the form of analytical reasoning or heuristics. Factors such as cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  1. Nuclear power plant personnel errors in decision-making as an object of probabilistic risk assessment

    International Nuclear Information System (INIS)

    Reer, B.

    1993-09-01

    The integration of human error analysis, also called man-machine system analysis (MMSA), is an essential part of probabilistic risk assessment (PRA). A new method is presented that allows for systematic and comprehensive inclusion in PRA of decision-based errors due to conflicts or similarities. New questioning techniques are developed for the error identification procedure. These errors are shown to be identifiable by looking at retroactions caused by subordinate goals as components of the overall safety-relevant goal. New quantification methods for estimating situation-specific probabilities are developed. The factors conflict and similarity are operationalized in a way that allows their quantification based on information that is usually available in PRA. The quantification procedure uses extrapolations and interpolations based on a sparse set of data related to decision-based errors. Moreover, for passive errors in decision-making a completely new approach is presented in which errors are quantified via a delay in initiating the required action rather than via error probabilities. The practicability of this dynamic approach is demonstrated by a probabilistic analysis of the actions required during the total-loss-of-feedwater event at the Davis-Besse plant in 1985. The extensions of the ''classical'' PRA method developed in this work are applied to an MMSA of the decay heat removal (DHR) of the ''HTR-500''. Errors in decision-making, as potential roots of extraneous acts, are taken into account in a comprehensive and systematic manner. Five additional errors are identified; however, the probabilistic quantification results in a nonsignificant increase of the DHR failure probability. (orig.) [de

  2. Human error in strabismus surgery : Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.

    2008-01-01

    Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to systematically analyze the human-error-related factors that introduce variability into the results of strabismus surgery. Methods- We identified the primary factors that influence the outcome of

  3. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    Science.gov (United States)

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  4. Sequential Ensembles Tolerant to Synthetic Aperture Radar (SAR) Soil Moisture Retrieval Errors

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2016-04-01

    Full Text Available Due to complicated and undefined systematic errors in satellite observation, data assimilation integrating model states with satellite observations is more complicated than data assimilation based on field measurements at a local scale. In the case of Synthetic Aperture Radar (SAR) soil moisture, the systematic errors arising from uncertainties in roughness conditions are significant and unavoidable, and current satellite bias correction methods do not resolve the problem very well. Thus, apart from the bias correction of satellite observations, it is important to assess the inherent capability of satellite data assimilation under such sub-optimal but more realistic observational error conditions. To this end, the time-evolving sequential ensembles of the Ensemble Kalman Filter (EnKF) are compared with the stationary ensemble of the Ensemble Optimal Interpolation (EnOI) scheme, which does not evolve the ensembles over time. As the sensitivity analysis demonstrated that SAR retrievals are more sensitive to surface roughness than to measurement errors, one aim of this study is to monitor how data assimilation alters the effects of roughness on SAR soil moisture retrievals. In the results, both data assimilation schemes provided intermediate values between SAR overestimation and model underestimation. However, under the same SAR observational error conditions, the sequential ensembles approached a calibrated model, showing the lowest Root Mean Square Error (RMSE), while the stationary ensemble converged towards the SAR observations, exhibiting the highest RMSE. Compared to stationary ensembles, sequential ensembles thus have a better tolerance to SAR retrieval errors. This inherent property of the EnKF suggests an operational merit as a satellite data assimilation system, given the limitations of currently available bias correction methods.
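The contrast between evolving and stationary ensembles can be illustrated with a scalar twin experiment. The sketch below is a minimal caricature, not the study's system: all soil-moisture numbers are hypothetical, the "forecast" is a stand-in relaxation toward a calibrated model state, and the ensembles are reduced to a single variance. It reproduces the qualitative result that the EnKF-style update stays near the model while the EnOI-style update converges toward the biased SAR-like observation:

```python
# Truth, a well-calibrated model state, and a biased SAR-like retrieval
# (hypothetical soil moisture values, m3/m3).
truth, model, obs = 0.27, 0.26, 0.35
R = 0.02 ** 2                          # assumed observation-error variance

def cycle(x, P, evolve):
    x = 0.5 * x + 0.5 * model          # stand-in forecast: relax toward model
    K = P / (P + R)                    # Kalman gain from ensemble spread
    x = x + K * (obs - x)              # analysis update
    if evolve:
        P = (1.0 - K) * P              # sequential ensembles: spread evolves
    return x, P                        # stationary ensembles keep P fixed

x_enkf = x_enoi = model
P_enkf = P_enoi = 0.03 ** 2
for _ in range(10):
    x_enkf, P_enkf = cycle(x_enkf, P_enkf, evolve=True)
    x_enoi, P_enoi = cycle(x_enoi, P_enoi, evolve=False)

print(x_enkf, x_enoi)  # EnKF-style stays near the model; EnOI-style near obs
```

Because the stationary spread keeps the gain large at every cycle, the biased retrieval is re-assimilated at full weight each time, which is the mechanism behind its higher RMSE here.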

  5. Radiologic errors, past, present and future.

    Science.gov (United States)

    Berlin, Leonard

    2014-01-01

    During the 10-year period beginning in 1949 with the publication of five articles in two radiology journals and the UK's The Lancet, a California radiologist named L.H. Garland almost single-handedly shocked the entire medical and especially the radiologic community. He focused their attention on a fact now known and accepted by all, but at that time not previously recognized and acknowledged only with great reluctance: that a substantial degree of observer error was prevalent in radiologic interpretation. In the more than half-century that followed, Garland's pioneering work has been affirmed and reaffirmed by numerous researchers. Retrospective studies disclosed then, and still disclose today, that diagnostic errors in radiologic interpretations of plain radiographic (as well as CT, MR, ultrasound, and radionuclide) images hover in the 30% range, not too dissimilar to the error rates in clinical medicine. Seventy percent of these errors are perceptual in nature, i.e., the radiologist does not "see" the abnormality on the imaging exam, perhaps due to poor conspicuity, satisfaction of search, or simply the "inexplicable psycho-visual phenomena of human perception." The remainder are cognitive errors: the radiologist sees an abnormality but fails to render a correct diagnosis by attaching the wrong significance to what is seen, perhaps due to inadequate knowledge, or to an alliterative or judgmental error. Computer-assisted detection (CAD), a technology that for the past two decades has been utilized primarily in mammographic interpretation, increases sensitivity but at the same time decreases specificity; whether it reduces errors is debatable. Efforts to reduce diagnostic radiologic errors continue, but the degree to which they will be successful remains to be determined.

  6. Grids in topographic maps reduce distortions in the recall of learned object locations.

    Science.gov (United States)

    Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank

    2014-01-01

    To date, it has been shown that cognitive map representations based on cartographic visualisations are systematically distorted. The grid is a traditional element of map graphics that has rarely been considered in research on perception-based spatial distortions. Grids not only support the map reader in finding coordinates or locations of objects; they also provide a systematic structure for clustering visual map information ("spatial chunks"). The aim of this study was to examine whether different cartographic kinds of grids reduce spatial distortions and improve recall memory for object locations. Recall performance was measured as both the percentage of correctly recalled objects (hit rate) and the mean distance errors of correctly recalled objects (spatial accuracy). Different kinds of grids (continuous lines, dashed lines, crosses) were applied to topographic maps. These maps were also varied in their type of characteristic areas (LANDSCAPE) and in their information layer compositions (DENSITY) to examine the effects of map complexity. The study, involving 144 participants, shows that all experimental cartographic factors (GRID, LANDSCAPE, DENSITY) improve recall performance and spatial accuracy for learned object locations. Overlaying a topographic map with a grid significantly reduces the mean distance errors of correctly recalled map objects. The paper includes a discussion of a square grid's usefulness for object location memory, independent of whether the grid is clearly visible (continuous or dashed lines) or only indicated by crosses.
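The two recall measures used above are straightforward to compute. A minimal sketch with hypothetical coordinate data (map positions in mm; `None` marks an object the participant failed to recall):

```python
import math

# Learned object locations vs. recalled positions (hypothetical, in mm).
learned  = [(10.0, 20.0), (35.0, 42.0), (60.0, 15.0), (80.0, 70.0)]
recalled = [(12.0, 21.0), None, (58.5, 15.5), (81.0, 73.0)]

hits = [(l, r) for l, r in zip(learned, recalled) if r is not None]
hit_rate = len(hits) / len(learned)                        # recall performance
mean_distance_error = sum(math.dist(l, r) for l, r in hits) / len(hits)

print(hit_rate, mean_distance_error)   # spatial accuracy over correct recalls
```

Note that the distance error is averaged only over correctly recalled objects, so the two measures can move independently.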

  7. Trial Sequential Analysis in systematic reviews with meta-analysis

    Directory of Open Access Journals (Sweden)

    Jørn Wetterslev

    2017-03-01

    systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Conclusions Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
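The core mechanic of Trial Sequential Analysis, monitoring a cumulative z-statistic against a boundary that is stricter while only a fraction of the required information size has accrued, can be sketched as follows. The snapshot values and the simple boundary shape are illustrative assumptions, not the exact alpha-spending function used in practice:

```python
import math

# Cumulative meta-analysis snapshots: (fraction of required information
# size accrued, cumulative z-score). All values are hypothetical.
trials = [(0.10, 2.10), (0.25, 2.25), (0.45, 1.70), (0.70, 1.95), (1.00, 2.05)]

alpha_z = 1.96                 # naive two-sided 5% threshold
naive_sig, tsa_sig = [], []
for info, z in trials:
    # Crude O'Brien-Fleming-style monitoring boundary: stricter while the
    # accrued information is only a fraction of the required size.
    boundary = alpha_z / math.sqrt(info)
    naive_sig.append(abs(z) > alpha_z)
    tsa_sig.append(abs(z) > boundary)

print(naive_sig)  # naive CIs declare significance at several early looks
print(tsa_sig)    # the sequential boundary is crossed only at full information
```

The early "significant" findings under the naive threshold illustrate exactly the type I error inflation that the sequential boundary is designed to control.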

  8. Potential Errors and Test Assessment in Software Product Line Engineering

    Directory of Open Access Journals (Sweden)

    Hartmut Lackner

    2015-04-01

    Full Text Available Software product lines (SPL) are a method for the development of variant-rich software systems. Compared to non-variable systems, testing SPLs is extensive due to the increasing number of possible products. Different approaches exist for testing SPLs, but there is little research on assessing the quality of these tests in terms of error detection capability. Such test assessment is based on injecting errors into a correct version of the system under test. However, to our knowledge, the potential errors in SPL engineering have never been systematically identified before. This article presents an overview of existing paradigms for specifying software product lines and the errors that can occur during the respective specification processes. To assess test quality, we transfer mutation testing techniques to SPL engineering and implement the identified errors as mutation operators. This allows us to run existing tests against defective products for the purpose of test assessment. From the results, we draw conclusions about the error-proneness of the surveyed SPL design paradigms and about how the quality of SPL tests can be improved.
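The mutation-based assessment loop can be sketched in miniature. The toy "SPL" below (features, constraints, and mutants are all hypothetical, not the paper's operators) derives a product from a configuration, injects specification errors as mutant derivations, and scores a test suite by the fraction of mutants it kills:

```python
# A "product" is the feature set derived from a configuration.
def derive_product(config):
    """Correct derivation: 'ssl' requires 'net'; 'cache' is optional."""
    features = {"base"}
    if config.get("net"):
        features.add("net")
        if config.get("ssl"):
            features.add("ssl")
    if config.get("cache"):
        features.add("cache")
    return features

# Mutants: each injects one plausible specification error.
def mutant_drop_requires(config):       # 'ssl' no longer requires 'net'
    f = derive_product(config)
    if config.get("ssl"):
        f.add("ssl")
    return f

def mutant_mandatory_cache(config):     # optional feature made mandatory
    f = derive_product(config)
    f.add("cache")
    return f

def mutant_drop_cache(config):          # optional feature silently dropped
    f = derive_product(config)
    f.discard("cache")
    return f

# Test suite: (configuration, expected product).
tests = [
    ({"net": True, "ssl": True}, {"base", "net", "ssl"}),
    ({"ssl": True}, {"base"}),          # ssl without net must be ignored
]

def killed(mutant):
    return any(mutant(cfg) != expected for cfg, expected in tests)

mutants = [mutant_drop_requires, mutant_mandatory_cache, mutant_drop_cache]
score = sum(killed(m) for m in mutants) / len(mutants)
print(score)   # the surviving cache mutant exposes a gap in the test suite
```

The surviving mutant points at exactly the kind of untested variability (no test ever selects `cache`) that this style of assessment is meant to reveal.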

  9. A Mobile Device App to Reduce Time to Drug Delivery and Medication Errors During Simulated Pediatric Cardiopulmonary Resuscitation: A Randomized Controlled Trial.

    Science.gov (United States)

    Siebert, Johan N; Ehrler, Frederic; Combescure, Christophe; Lacroix, Laurence; Haddad, Kevin; Sanchez, Oliver; Gervaix, Alain; Lovis, Christian; Manzano, Sergio

    2017-02-01

    During pediatric cardiopulmonary resuscitation (CPR), vasoactive drug preparation for continuous infusion is both complex and time-consuming, placing children at higher risk than adults for medication errors. Following an evidence-based, ergonomics-driven approach, we developed a mobile device app called Pediatric Accurate Medication in Emergency Situations (PedAMINES), intended to guide caregivers step-by-step from preparation to delivery of drugs requiring continuous infusion. The aim of our study was to determine whether the use of PedAMINES reduces drug preparation time (TDP) and time to delivery (TDD; primary outcome), as well as medication errors (secondary outcomes) when compared with conventional preparation methods. The study was a randomized controlled crossover trial with 2 parallel groups comparing PedAMINES with a conventional and internationally used drug infusion rate table in the preparation of continuous drug infusions. We used a simulation-based pediatric CPR cardiac arrest scenario with a high-fidelity manikin in the shock room of a tertiary care pediatric emergency department. After epinephrine-induced return of spontaneous circulation, pediatric emergency nurses were first asked to prepare a continuous infusion of dopamine, using either PedAMINES (intervention group) or the infusion table (control group), and second, a continuous infusion of norepinephrine, crossing over to the other method. The primary outcome was the elapsed time in seconds, in each allocation group, from the oral prescription by the physician to TDD by the nurse. TDD included TDP. The secondary outcome was the medication dosage error rate during the sequence from drug preparation to drug injection. A total of 20 nurses were randomized into 2 groups. During the first study period, mean TDP while using PedAMINES and conventional preparation methods was 128.1 s (95% CI 102-154) and 308.1 s (95% CI 216-400), respectively (180 s reduction, P=.002). Mean TDD was 214 s (95% CI 171-256) and

  10. [Efficacy of motivational interviewing for reducing medication errors in chronic patients over 65 years with polypharmacy: Results of a cluster randomized trial].

    Science.gov (United States)

    Pérula de Torres, Luis Angel; Pulido Ortega, Laura; Pérula de Torres, Carlos; González Lama, Jesús; Olaya Caro, Inmaculada; Ruiz Moral, Roger

    2014-10-21

    To evaluate the effectiveness of an intervention based on motivational interviewing in reducing medication errors in chronic patients over 65 with polypharmacy. Cluster randomized trial that included doctors and nurses of 16 primary care centers and chronic patients over 65 years with polypharmacy. The professionals were assigned to the experimental or the control group using stratified randomization. The intervention consisted of training the professionals and reviewing patient treatments, with motivational interviewing applied in the experimental group and the usual approach in the control group. The primary endpoint (medication error) was analyzed at the individual level and was estimated with the absolute risk reduction (ARR), relative risk reduction (RRR), number needed to treat (NNT) and by multiple logistic regression analysis. Thirty-two professionals were randomized (19 doctors and 13 nurses); 27 of them consecutively recruited 154 patients (13 professionals in the experimental group recruited 70 patients and 14 professionals in the control group recruited 84 patients) who completed 6 months of follow-up. The mean age of patients was 76 years (68.8% women). A decrease in the average number of medication errors was observed over the period. The reduction was greater in the experimental group than in the control group (F=5.109, P=.035): ARR 29% (95% confidence interval [95% CI] 15.0-43.0%), RRR 0.59 (95% CI 0.31-0.76), and NNT 3.5 (95% CI 2.3-6.8). Motivational interviewing is more effective than the usual approach in reducing medication errors in patients over 65 with polypharmacy. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
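The three effect measures reported above are simple arithmetic on the two groups' event rates. The sketch below uses hypothetical error rates chosen so the derived measures land near the figures reported (ARR 29%, RRR 0.59, NNT 3.5); they are not the trial's raw data:

```python
# Hypothetical medication-error rates per group.
risk_control = 0.49        # usual-approach group
risk_intervention = 0.20   # motivational-interviewing group

arr = risk_control - risk_intervention   # absolute risk reduction
rrr = arr / risk_control                 # relative risk reduction
nnt = 1.0 / arr                          # number needed to treat

print(arr, rrr, nnt)
```

NNT is just the reciprocal of the ARR, which is why an ARR of 29% corresponds to roughly 3.5 patients treated per error prevented.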

  11. Speech Recognition for Medical Dictation: Overview in Quebec and Systematic Review.

    Science.gov (United States)

    Poder, Thomas G; Fisette, Jean-François; Déry, Véronique

    2018-04-03

    Speech recognition is increasingly used in medical reporting. The aim of this article is to identify in the literature the strengths and weaknesses of this technology, as well as barriers to and facilitators of its implementation. A systematic review of systematic reviews was performed using PubMed, Scopus, the Cochrane Library and the Center for Reviews and Dissemination through August 2017. The gray literature was also consulted. The quality of the systematic reviews was assessed with the AMSTAR checklist. The main inclusion criterion was the use of speech recognition for medical reporting (front-end or back-end). A survey was also conducted in Quebec, Canada, to identify the dissemination of this technology in the province, as well as the factors leading to the success or failure of its implementation. Five systematic reviews were identified. These reviews indicated a high level of heterogeneity across studies, and the quality of the studies reported was generally poor. Speech recognition is not as accurate as human transcription, but it can dramatically reduce turnaround times for reporting. In front-end use, medical doctors need to spend more time on dictation and correction than is required with human transcription, and major errors occur up to three times more frequently. In back-end use, a potential increase in the productivity of transcriptionists was noted. In conclusion, speech recognition offers several advantages for medical reporting, but these advantages are countered by an increased burden on medical doctors and by the risk of additional errors in medical reports. It is also hard to identify for which medical specialties and which clinical activities the use of speech recognition will be most beneficial.

  12. Adherence to Mediterranean Diet Reduces Incident Frailty Risk: Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Kojima, Gotaro; Avgerinou, Christina; Iliffe, Steve; Walters, Kate

    2018-04-01

    To conduct a systematic review of the literature on prospective cohort studies examining associations between adherence to a Mediterranean diet and incident frailty, and to perform a meta-analysis to synthesize the pooled risk estimates. Systematic review and meta-analysis. Embase, MEDLINE, CINAHL, PsycINFO, and the Cochrane Library were systematically searched on September 14, 2017. We reviewed references of included studies and relevant review papers and performed forward citation tracking for additional studies. Corresponding authors were contacted for additional data necessary for the meta-analysis. Community-dwelling older adults (mean age ≥60). Incident frailty risk according to adherence to a Mediterranean diet. Two reviewers independently screened the title, abstract, and full text to ascertain the eligibility of the 125 studies that the systematic literature search identified, and four studies were included (5,789 older people with a mean follow-up of 3.9 years). Two reviewers extracted data from the studies independently. All four studies provided adjusted odds ratios (ORs) of incident frailty risk according to three Mediterranean diet score (MDS) groups (0-3, 4-5, and 6-9). Greater adherence to a Mediterranean diet was associated with significantly lower incident frailty risk (pooled OR = 0.62, 95% CI = 0.47-0.82, P = .001 for MDS 4-5; pooled OR = 0.44, 95% CI = 0.31-0.64, P < .001 for MDS 6-9). Greater adherence to a Mediterranean diet is thus associated with significantly lower risk of incident frailty in community-dwelling older people. Future studies should confirm these findings and evaluate whether adherence to a Mediterranean diet can reduce the risk of frailty, including in non-Mediterranean populations. © 2018, Copyright the Authors Journal compilation © 2018, The American Geriatrics Society.
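Pooled ORs of this kind are conventionally obtained by fixed-effect inverse-variance weighting on the log-odds scale, recovering each study's standard error from its reported CI. The sketch below uses made-up study-level ORs and CIs, not the four included studies:

```python
import math

# Hypothetical study-level results: (OR, 95% CI lower, 95% CI upper).
studies = [(0.58, 0.38, 0.89), (0.70, 0.45, 1.09), (0.55, 0.30, 0.99)]

weights, logs = [], []
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
    weights.append(1.0 / se ** 2)                    # inverse-variance weight
    logs.append(math.log(or_))

pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
pooled_or = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))
print(pooled_or, ci)
```

Working on the log scale keeps the weighting symmetric, and the pooled CI is narrower than any single study's, reflecting the combined information.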

  13. Competition increases binding errors in visual working memory.

    Science.gov (United States)

    Emrich, Stephen M; Ferber, Susanne

    2012-04-20

    When faced with maintaining multiple objects in visual working memory, item information must be bound to the correct object in order to be correctly recalled. Sometimes, however, binding errors occur, and participants report the feature (e.g., color) of an unprobed, non-target item. In the present study, we examine whether the configuration of sample stimuli affects the proportion of these binding errors. The results demonstrate that participants mistakenly report the identity of the unprobed item (i.e., they make a non-target response) when sample items are presented close together in space, suggesting that binding errors can increase independent of increases in memory load. Moreover, the proportion of these non-target responses is linearly related to the distance between sample items, suggesting that these errors are spatially specific. Finally, presenting sample items sequentially decreases non-target responses, suggesting that reducing competition between sample stimuli reduces the number of binding errors. Importantly, these effects all occurred without increases in the amount of error in the memory representation. These results suggest that competition during encoding can account for some of the binding errors made during VWM recall.

  14. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced that would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field-line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed, along with possible correcting coils for reducing such field errors

  15. Evaluation of a breath-motion-correction technique in reducing measurement error in hepatic CT perfusion imaging

    International Nuclear Information System (INIS)

    He Wei; Liu Jianyu; Li Xuan; Li Jianying; Liao Jingmin

    2009-01-01

    Objective: To evaluate the effect of a breath-motion-correction (BMC) technique in reducing measurement error of the time-density curve (TDC) in hepatic CT perfusion imaging. Methods: Twenty-five patients with suspected liver diseases underwent hepatic CT perfusion scans. The right branch of the portal vein was selected as the anatomy of interest, and BMC was performed to realign image slices for the TDC according to the rule of minimizing the temporal changes of overall structures. Ten ROIs were selected on the right branch of the portal vein to generate 10 TDCs each, with and without BMC. The values of peak enhancement and time-to-peak enhancement for each TDC were measured, and the coefficients of variation (CV) of peak enhancement and time-to-peak enhancement were calculated for each patient with and without BMC. The Wilcoxon signed-rank test was used to evaluate the difference between the CVs of the two parameters obtained with and without BMC, and the independent-samples t test was used to evaluate the difference between the values of peak enhancement obtained with and without BMC. Results: The median (quartiles) of the CV of peak enhancement with BMC [2.84% (2.10%, 4.57%)] was significantly lower than that without BMC [5.19% (3.90%, 7.27%)] (Z=-3.108, P<0.01). The median (quartiles) of the CV of time-to-peak enhancement with BMC [2.64% (0.76%, 4.41%)] was significantly lower than that without BMC [5.23% (3.81%, 7.43%)] (Z=-3.924, P<0.01). In 8 cases, the TDC demonstrated statistically significant higher peak enhancement with BMC (P<0.05). Conclusion: By applying the BMC technique we can effectively reduce measurement error in the parameters of the TDC in hepatic CT perfusion imaging. (authors)
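The per-patient CV used above is the ratio of the standard deviation to the mean across the ten ROI measurements. A minimal sketch with hypothetical peak-enhancement values (HU), illustrating how motion correction tightens the spread:

```python
import statistics

# Ten ROI measurements of TDC peak enhancement for one patient,
# with and without breath-motion correction (hypothetical values, HU).
peaks_with_bmc = [148, 151, 149, 152, 150, 147, 151, 149, 150, 148]
peaks_without = [139, 155, 146, 160, 142, 151, 158, 137, 149, 156]

def cv(values):
    """Coefficient of variation as a percentage (sample stdev / mean)."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

print(cv(peaks_with_bmc), cv(peaks_without))  # smaller CV after correction
```

The study's comparison of such per-patient CVs across 25 patients then uses the Wilcoxon signed-rank test, since the paired CV differences need not be normally distributed.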

  16. Hospital medication errors in a pharmacovigilance system in Colombia

    Directory of Open Access Journals (Sweden)

    Jorge Enrique Machado-Alba

    2015-11-01

    Full Text Available Objective: this study analyzes the medication errors reported to a pharmacovigilance system by 26 hospitals for patients in the healthcare system of Colombia. Methods: this retrospective study analyzed the medication errors reported to a systematized database between 1 January 2008 and 12 September 2013. The medication is dispensed by the company Audifarma S.A. to hospitals and clinics around Colombia. Data were classified according to the taxonomy of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP). The data analysis was performed using SPSS 22.0 for Windows, considering p-values < 0.05 significant. Results: there were 9 062 medication errors in 45 hospital pharmacies. Real errors accounted for 51.9% (n = 4 707), of which 12.0% (n = 567) reached the patient (Categories C to I) and caused harm (Categories E to I) to 17 subjects (0.36%). The main process involved in errors that occurred (Categories B to I) was prescription (n = 1 758, 37.3%), followed by dispensation (n = 1 737, 36.9%), transcription (n = 970, 20.6%) and administration (n = 242, 5.1%). Errors in the administration process were 45.2 times more likely to reach the patient (95% CI: 20.2–100.9). Conclusions: medication error reporting systems and prevention strategies should be widespread in hospital settings, prioritizing efforts to address the administration process.

  17. Reducing errors in aircraft atmospheric inversion estimates of point-source emissions: the Aliso Canyon natural gas leak as a natural tracer experiment

    Science.gov (United States)

    Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.

    2018-04-01

    Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass-balance values, with the remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m/s, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on the spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and
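The covariance-scaling idea can be illustrated with a scalar inversion. The sketch below is a caricature, not the study's inversion system: the leak rate, per-flight footprints, and wind errors are hypothetical, and the full covariance matrix is reduced to per-observation weights. It shows how inflating each observation's error variance by its wind-speed error down-weights transport-corrupted flights and pulls the estimate back toward the truth:

```python
# Scalar inversion of a point-source leak rate from airborne enhancements.
# All values are hypothetical; units are arbitrary.
q_true = 50.0                       # true leak rate

# Per-flight: (true footprint sensitivity, modeled wind-speed error in m/s).
flights = [(1.0, 0.5), (0.8, 1.0), (1.2, 8.0), (0.9, 0.4), (1.1, 9.0)]
u_model = 5.0                       # modeled mean wind speed (m/s)

obs, h_model, wind_err = [], [], []
for h_true, du in flights:
    obs.append(h_true * q_true)     # observed downwind enhancement
    # Plume concentration scales ~1/u, so a wind-speed error distorts the
    # modeled footprint used in the inversion:
    h_model.append(h_true * u_model / (u_model + du))
    wind_err.append(du)

def wls(hs, ys, ws):
    """Weighted least squares for a scalar source strength."""
    num = sum(w * h * y for w, h, y in zip(ws, hs, ys))
    den = sum(w * h * h for w, h in zip(ws, hs))
    return num / den

uniform = wls(h_model, obs, [1.0] * len(obs))
# Scaled model-data mismatch: inflate each observation's error variance by
# its wind-speed error, down-weighting the transport-corrupted flights.
scaled = wls(h_model, obs, [1.0 / (1.0 + du ** 2) for du in wind_err])
print(uniform, scaled)   # the scaled weighting lands closer to q_true
```

With uniform weights, the two flights with large wind errors bias the estimate high; the scaled mismatch variance largely removes their influence.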

  18. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Science.gov (United States)

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
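Both directions of bias are easy to reproduce in a small simulation. The sketch below (hypothetical data-generating model: a confounder, an exposure with true coefficient 0.5, and independent classical measurement error added to one variable at a time) shows that error on the exposure attenuates its estimated effect, while error on the confounder inflates it through residual confounding:

```python
import random

random.seed(1)
n = 20000

def fit_exposure_coef(xs, zs, ys):
    """OLS coefficient of the exposure in y ~ exposure + confounder."""
    mx, mz, my = sum(xs) / n, sum(zs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    szz = sum((z - mz) ** 2 for z in zs)
    sxz = sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    szy = sum((z - mz) * (y - my) for z, y in zip(zs, ys))
    det = sxx * szz - sxz * sxz
    return (szz * sxy - sxz * szy) / det   # 2x2 normal equations, solved

z = [random.gauss(0, 1) for _ in range(n)]           # confounder
x = [zi + random.gauss(0, 1) for zi in z]            # exposure, confounded
y = [0.5 * xi + 1.0 * zi + random.gauss(0, 1)        # true exposure effect 0.5
     for xi, zi in zip(x, z)]

true_beta = fit_exposure_coef(x, z, y)               # ~0.5 with exact data

x_err = [xi + random.gauss(0, 1) for xi in x]        # error on the exposure
z_err = [zi + random.gauss(0, 1) for zi in z]        # error on the confounder

attenuated = fit_exposure_coef(x_err, z, y)          # biased toward 0
inflated = fit_exposure_coef(x, z_err, y)            # biased away from 0
print(true_beta, attenuated, inflated)
```

The inflated estimate arises because the mismeasured confounder no longer fully absorbs its own effect, which leaks into the exposure coefficient; this is why the direction of bias is hard to anticipate without knowing which variable carries the error.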

  19. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Study protocol: the empirical investigation of methods to correct for measurement error in biobanks with dietary assessment

    Directory of Open Access Journals (Sweden)

    Masson Lindsey F

    2011-10-01

    Full Text Available Abstract Background The Public Population Project in Genomics (P3G) is an organisation that aims to promote collaboration between researchers in the field of population-based genomics. The main objectives of P3G are to encourage collaboration between researchers and biobankers, optimize study design, promote the harmonization of information use in biobanks, and facilitate transfer of knowledge between interested parties. The importance of calibration and harmonisation of methods for environmental exposure assessment to allow pooling of data across studies in the evaluation of gene-environment interactions has been recognised by P3G, which has set up a methodological group on calibration with the aims of: (1) reviewing the published methodological literature on measurement error correction methods, with their assumptions and methods of implementation; (2) reviewing the evidence available from published nutritional epidemiological studies that have used a calibration approach; (3) disseminating information in the form of a comparison chart on approaches to perform calibration studies and how to obtain correction factors, in order to support research groups collaborating within the P3G network that are unfamiliar with the methods employed; and (4) with application to the field of nutritional epidemiology, including gene-diet interactions, ultimately developing an inventory of the typical correction factors for various nutrients. Methods/Design Systematic review of (a) the methodological literature on methods to correct for measurement error in epidemiological studies; and (b) studies that have been designed primarily to investigate the association between diet and disease and have also corrected for measurement error in dietary intake. Discussion The conduct of a systematic review of the methodological literature on calibration will facilitate the evaluation of methods to correct for measurement error and the design of calibration studies for the prospective pooling of
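One of the correction methods such a review would cover is regression calibration. A minimal sketch, assuming two replicate intake measurements with independent classical error: the reliability ratio is estimated from the replicates and used to de-attenuate the naive slope (all numbers invented):

```python
import numpy as np

def regression_calibration(seed=2, n=100_000):
    """De-attenuate a naive diet-outcome slope using replicate measurements."""
    rng = np.random.default_rng(seed)
    T = rng.normal(size=n)                   # true long-term intake
    W1 = T + rng.normal(size=n)              # replicate measurement 1
    W2 = T + rng.normal(size=n)              # replicate 2, independent error
    Y = 0.5 * T + rng.normal(size=n)         # outcome; true slope = 0.5

    naive = np.polyfit(W1, Y, 1)[0]          # attenuated slope on error-prone W1
    # Cov(W1, W2) estimates Var(T), so their ratio estimates the reliability:
    lam = np.cov(W1, W2)[0, 1] / np.var(W1, ddof=1)
    return naive, naive / lam                # (attenuated, corrected)

print(regression_calibration())
```

Here the reliability ratio is 0.5, so the naive slope lands near 0.25 while the corrected slope recovers roughly the true 0.5.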

  1. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time-varying local uncertainties. Traditional uncertainty methods are not adequate for estimating the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time-varying errors. Instantaneous uncertainty estimates are therefore propagated into the measured statistics, allowing accurate uncertainty quantification of the time-mean and higher-order statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulent statistics such as u'u'-bar. Within this paper, nonlinear, time-varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time-mean, the variance and the covariance can be found. The applicability of the Taylor-series uncertainty equations to time-varying systematic and random errors and asymmetric error distributions is demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the ‘true’ variance. However, the Taylor-series method overpredicts the uncertainty in the variance when the instantaneous variations of the systematic errors are large or are on the same order of magnitude as the ‘true’ variance. (paper)
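The core fact that random error always elevates the measured variance is easy to verify by Monte Carlo. In the sketch below (illustrative numbers only), the expected measured variance is the true variance plus the random-error variance, so the correction is simply to subtract the known error variance:

```python
import numpy as np

def measured_variance(seed=3, n=500_000, true_std=2.0, err_std=1.0):
    """Sample variance of a signal observed with additive random error."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, true_std, n)        # true fluctuating quantity
    meas = u + rng.normal(0.0, err_std, n)  # instantaneous random error
    return np.var(meas, ddof=1)

v = measured_variance()
# E[measured variance] = true variance + error variance (4.0 + 1.0 here),
# so random error always inflates the measured variance; subtracting the
# known random-error variance recovers the true value:
corrected = v - 1.0 ** 2
print(v, corrected)
```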

  2. Pediatric hospital discharge interventions to reduce subsequent utilization: a systematic review.

    Science.gov (United States)

    Auger, Katherine A; Kenyon, Chén C; Feudtner, Chris; Davis, Matthew M

    2014-04-01

    Reducing avoidable readmission and posthospitalization emergency department (ED) utilization has become a focus of quality-of-care measures and initiatives. For pediatric patients, no systematic efforts have assessed the evidence for interventions to reduce these events. We sought to synthesize existing evidence on pediatric discharge practices and interventions to reduce hospital readmission and posthospitalization ED utilization. PubMed and the Cumulative Index to Nursing and Allied Health Literature. Studies available in English involving pediatric inpatient discharge interventions with at least 1 outcome of interest were included. We utilized a modified Cochrane Good Practice data extraction tool and assessed study quality with the Downs and Black tool. Our search identified a total of 1296 studies, 14 of which met full inclusion criteria. All included studies examined multifaceted discharge interventions initiated in the inpatient setting. Overall, 2 studies demonstrated statistically significant reductions in both readmissions and subsequent ED visits, 4 studies demonstrated statistically significant reductions in either readmissions or ED visits, and 2 studies found statistically significant increases in subsequent utilization. Several studies were not sufficiently powered to detect changes in either subsequent utilization outcome measure. Interventions that demonstrated reductions in subsequent utilization targeted children with specific chronic conditions, providing enhanced inpatient feedback and education reinforced with postdischarge support. Interventions seeking to reduce subsequent utilization should identify an individual or team to assume responsibility for the inpatient-to-outpatient transition and offer ongoing support to the family via telephone or home visitation following discharge. © 2013 Society of Hospital Medicine.

  3. Training situational awareness to reduce surgical errors in the operating room

    NARCIS (Netherlands)

    Graafland, M.; Schraagen, J.M.C.; Boermeester, M.A.; Bemelman, W.A.; Schijven, M.P.

    2015-01-01

    Background: Surgical errors result from faulty decision-making, misperceptions and the application of suboptimal problem-solving strategies, just as often as they result from technical failure. To date, surgical training curricula have focused mainly on the acquisition of technical skills. The aim

  4. Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    Science.gov (United States)

    Stephenson, Edward; Imig, Astrid

    2009-10-01

    The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
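A toy version of the cross-ratio scheme of Ohlsen and Keaton can show the first-order cancellation the abstract refers to. Assuming idealised yields of the multiplicative form N = luminosity × acceptance × (1 ± pA), the cross ratio removes both mismatch factors; real polarimeters break this idealisation at second order, e.g. through rate-dependent detector response:

```python
import math

def cross_ratio_asym(L_up, R_up, L_dn, R_dn):
    """Cross-ratio asymmetry from left/right yields for spin-up/down states."""
    r = math.sqrt((L_up * R_dn) / (L_dn * R_up))
    return (r - 1.0) / (r + 1.0)

# Idealised yields N = luminosity * acceptance * (1 +/- pA), pA = 0.10,
# with deliberately mismatched acceptances and per-state luminosities.
pA = 0.10
aL, aR = 1.3, 0.7                 # left/right detector acceptances
lu, ld = 1.2, 0.9                 # luminosity for spin-up / spin-down
L_up, R_up = lu * aL * (1 + pA), lu * aR * (1 - pA)
L_dn, R_dn = ld * aL * (1 - pA), ld * aR * (1 + pA)
print(cross_ratio_asym(L_up, R_up, L_dn, R_dn))  # recovers pA despite mismatches
```

The acceptance and luminosity factors cancel exactly in the ratio, leaving (r-1)/(r+1) = pA in this idealised model.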

  5. Analysis of systematic error deviation of water temperature measurement at the fuel channel outlet of the reactor Maria

    International Nuclear Information System (INIS)

    Bykowski, W.

    2000-01-01

    The reactor Maria has two primary cooling circuits: the fuel channel cooling circuit and the reactor pool cooling circuit. Fuel elements are placed inside the fuel channels, which are linked in parallel between the collectors. In the course of reactor operation the following measurements are performed: continuous measurement of the water temperature at the fuel channel inlets, continuous measurement of the water temperature at the outlet of each fuel channel, and continuous measurement of the water flow rate through each fuel channel. Based on these thermal-hydraulic parameters the instantaneous thermal power generated in each fuel channel is determined, and from that value the thermal balance and the degree of fuel burnup are assessed. The work contains an analysis estimating the systematic error of the temperature measurement at the outlet of each fuel channel, and hence the erroneous assessment of the thermal power extracted in each fuel channel and of the burnup degree of the individual fuel elements. The results of measurements of the separate factors contributing to the deviations for the fuel channels are enclosed. (author)
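The channel power determination described above reduces to P = ṁ·c_p·ΔT, so any systematic outlet-temperature error maps linearly into the inferred power and hence into the burnup estimate. A minimal sketch with invented numbers:

```python
CP_WATER = 4186.0  # J/(kg*K), approximate specific heat of water

def channel_power_kw(flow_kg_s, t_in_c, t_out_c):
    """Instantaneous thermal power from flow rate and temperature rise."""
    return flow_kg_s * CP_WATER * (t_out_c - t_in_c) / 1000.0

p = channel_power_kw(5.0, 45.0, 57.0)      # 5 kg/s, 12 K rise -> ~251 kW
# A +0.5 K systematic outlet-temperature error overstates the 12 K rise
# by ~4%, which propagates directly into power and burnup bookkeeping:
p_biased = channel_power_kw(5.0, 45.0, 57.5)
print(p, p_biased)
```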

  6. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 erroneous tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  7. Discretisation errors in Landau gauge on the lattice

    International Nuclear Information System (INIS)

    Bonnet, Frederic D.R.; Bowman, Patrick O.; Leinweber, Derek B.; Williams, Anthony G.; Richards, David G.

    1999-01-01

    Lattice discretization errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition improves comparison with continuum Landau gauge in two ways: (1) through the elimination of O(a²) errors and (2) through a secondary effect of reducing the size of higher-order errors. These results emphasize the importance of implementing an improved gauge fixing condition
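The effect of removing O(a²) errors can be illustrated with a deliberately simple analogue (not the lattice gauge-fixing functional itself): improving a central finite difference by combining two spacings cancels the O(a²) term and leaves an O(a⁴) remainder:

```python
import math

def d_central(f, x, a):
    """Central difference: discretisation error is O(a^2)."""
    return (f(x + a) - f(x - a)) / (2 * a)

def d_improved(f, x, a):
    """Richardson-style combination of spacings a and 2a cancels the O(a^2)
    term, leaving an O(a^4) error."""
    return (4 * d_central(f, x, a) - d_central(f, x, 2 * a)) / 3

a = 0.1
err2 = abs(d_central(math.sin, 1.0, a) - math.cos(1.0))
err4 = abs(d_improved(math.sin, 1.0, a) - math.cos(1.0))
print(err2, err4)  # the improved estimate is far more accurate
```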

  8. Evaluating the effects of modeling errors for isolated finite three-dimensional targets

    Science.gov (United States)

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhou, Hui

    2017-10-01

    Optical three-dimensional (3-D) nanostructure metrology utilizes a model-based metrology approach to determine critical dimensions (CDs) that are well below the inspection wavelength. Our project at the National Institute of Standards and Technology is evaluating how to attain key CD and shape parameters from engineered in-die capable metrology targets. More specifically, the quantities of interest are determined by varying the input parameters for a physical model until the simulations agree with the actual measurements within acceptable error bounds. As in most applications, establishing a reasonable balance between model accuracy and time efficiency is a complicated task. A well-established simplification is to model the intrinsically finite 3-D nanostructures as either periodic or infinite in one direction, reducing the computationally expensive 3-D simulations to usually less complex two-dimensional (2-D) problems. Systematic errors caused by this simplified model can directly influence the fitting of the model to the measurement data and are expected to become more apparent with decreasing lengths of the structures. We identify these effects using selected simulation results and present experimental setups, e.g., illumination numerical apertures and focal ranges, that can increase the validity of the 2-D approach.

  9. Bayesian analysis of data and model error in rainfall-runoff hydrological models

    Science.gov (United States)

    Kavetski, D.; Franks, S. W.; Kuczera, G.

    2004-12-01

    A major unresolved issue in the identification and use of conceptual hydrologic models is realistic description of uncertainty in the data and model structure. In particular, hydrologic parameters often cannot be measured directly and must be inferred (calibrated) from observed forcing/response data (typically, rainfall and runoff). However, rainfall varies significantly in space and time, yet is often estimated from sparse gauge networks. Recent work showed that current calibration methods (e.g., standard least squares, multi-objective calibration, generalized likelihood uncertainty estimation) ignore forcing uncertainty and assume that the rainfall is known exactly. Consequently, they can yield strongly biased and misleading parameter estimates. This deficiency confounds attempts to reliably test model hypotheses, to generalize results across catchments (the regionalization problem) and to quantify predictive uncertainty when the hydrologic model is extrapolated. This paper continues the development of a Bayesian total error analysis (BATEA) methodology for the calibration and identification of hydrologic models, which explicitly incorporates the uncertainty in both the forcing and response data, and allows systematic model comparison based on residual model errors and formal Bayesian hypothesis testing (e.g., using Bayes factors). BATEA is based on explicit stochastic models for both forcing and response uncertainty, whereas current techniques focus solely on response errors. Hence, unlike existing methods, the BATEA parameter equations directly reflect the modeler's confidence in all the data. We compare several approaches to approximating the parameter distributions: a) full Markov Chain Monte Carlo methods and b) simplified approaches based on linear approximations. Studies using synthetic and real data from the US and Australia show that BATEA systematically reduces the parameter bias, leads to more meaningful model fits and allows model comparison taking
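Why ignoring forcing error biases calibrated parameters can be shown with a stripped-down stand-in for a rainfall-runoff model: a linear model with multiplicative rainfall error. This illustrates the bias mechanism only, not BATEA itself; all numbers are invented:

```python
import numpy as np

def calibrate(seed=4, n=200_000, theta=0.7):
    """Fit 'runoff = theta * rainfall' when the observed rainfall carries
    multiplicative (lognormal) error, as with sparse gauge networks."""
    rng = np.random.default_rng(seed)
    rain_true = rng.lognormal(mean=1.0, sigma=0.5, size=n)
    rain_obs = rain_true * rng.lognormal(mean=0.0, sigma=0.4, size=n)
    runoff = theta * rain_true + rng.normal(scale=0.1, size=n)

    # Standard least squares ignoring forcing error -> biased theta:
    naive = np.sum(rain_obs * runoff) / np.sum(rain_obs ** 2)
    # For a lognormal multiplier m with log-std 0.4, plim(naive) =
    # theta * E[m] / E[m^2] = theta * exp(-1.5 * 0.4**2); undo that factor:
    corrected = naive * np.exp(1.5 * 0.4 ** 2)
    return naive, corrected

print(calibrate())
```

The naive estimate is systematically low even with abundant data; a model that accounts for the forcing error (here via the known moments of the multiplier) removes the bias, which is the qualitative behaviour BATEA targets with full stochastic error models.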

  10. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    Science.gov (United States)

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
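The quantitative side of FTA mentioned above comes from evaluating AND/OR gates over basic-event probabilities. A minimal sketch with hypothetical probabilities (the mini-tree below is invented, not taken from the reviewed cases):

```python
def p_or(*ps):
    """OR gate: the event occurs if any independent input occurs."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*ps):
    """AND gate: the event occurs only if all independent inputs occur."""
    q = 1.0
    for p in ps:
        q *= p
    return q

# Hypothetical mini-tree: missed diagnosis if (history-taking gap AND no
# safety-net follow-up) OR test result not communicated.
p_top = p_or(p_and(0.10, 0.30), 0.02)
print(round(p_top, 4))  # 0.0494
```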

  11. Multimodal system designed to reduce errors in recording and administration of drugs in anaesthesia: prospective randomised clinical evaluation.

    Science.gov (United States)

    Merry, Alan F; Webster, Craig S; Hannam, Jacqueline; Mitchell, Simon J; Henderson, Robert; Reid, Papaarangi; Edwards, Kylie-Ellen; Jardim, Anisoara; Pak, Nick; Cooper, Jeremy; Hopley, Lara; Frampton, Chris; Short, Timothy G

    2011-09-22

    To clinically evaluate a new patented multimodal system (SAFERSleep) designed to reduce errors in the recording and administration of drugs in anaesthesia. Prospective randomised open label clinical trial. Five designated operating theatres in a major tertiary referral hospital. Eighty nine consenting anaesthetists managing 1075 cases in which there were 10,764 drug administrations. Use of the new system (which includes customised drug trays and purpose designed drug trolley drawers to promote a well organised anaesthetic workspace and aseptic technique; pre-filled syringes for commonly used anaesthetic drugs; large legible colour coded drug labels; a barcode reader linked to a computer, speakers, and touch screen to provide automatic auditory and visual verification of selected drugs immediately before each administration; automatic compilation of an anaesthetic record; an on-screen and audible warning if an antibiotic has not been administered within 15 minutes of the start of anaesthesia; and certain procedural rules-notably, scanning the label before each drug administration) versus conventional practice in drug administration with a manually compiled anaesthetic record. Primary: composite of errors in the recording and administration of intravenous drugs detected by direct observation and by detailed reconciliation of the contents of used drug vials against recorded administrations; and lapses in responding to an intermittent visual stimulus (vigilance latency task). Secondary: outcomes in patients; analyses of anaesthetists' tasks and assessments of workload; evaluation of the legibility of anaesthetic records; evaluation of compliance with the procedural rules of the new system; and questionnaire based ratings of the respective systems by participants. 
The overall mean rate of drug errors per 100 administrations was 9.1 (95% confidence interval 6.9 to 11.4) with the new system (one in 11 administrations) and 11.6 (9.3 to 13.9) with conventional methods (one

  12. Reducing Systematic Errors in Oxide Species with Density Functional Theory Calculations

    DEFF Research Database (Denmark)

    Christensen, Rune; Hummelshøj, Jens S.; Hansen, Heine Anton

    2015-01-01

    Density functional theory calculations can be used to gain valuable insight into the fundamental reaction processes in metal−oxygen systems, e.g., metal−oxygen batteries. Here, the ability of a range of different exchange-correlation functionals to reproduce experimental enthalpies of formation...

  13. Coping with human errors through system design: Implications for ecological interface design

    DEFF Research Database (Denmark)

    Rasmussen, Jens; Vicente, Kim J.

    1989-01-01

    Research during recent years has revealed that human errors are not stochastic events which can be removed through improved training programs or optimal interface design. Rather, errors tend to reflect either systematic interference between various models, rules, and schemata, or the effects of the adaptive mechanisms involved in learning. In terms of design implications, these findings suggest that reliable human-system interaction will be achieved by designing interfaces which tend to minimize the potential for control interference and support recovery from errors. In other words, the focus should be on control of the effects of errors rather than on the elimination of errors per se. In this paper, we propose a theoretical framework for interface design that attempts to satisfy these objectives. The goal of our framework, called ecological interface design, is to develop a meaningful representation...

  14. Tumor Localization Using Cone-Beam CT Reduces Setup Margins in Conventionally Fractionated Radiotherapy for Lung Tumors

    International Nuclear Information System (INIS)

    Yeung, Anamaria R.; Li, Jonathan G.; Shi Wenyin; Newlin, Heather E.; Chvetsov, Alexei; Liu, Chihray; Palta, Jatinder R.; Olivier, Kenneth

    2009-01-01

    Purpose: To determine whether setup margins can be reduced using cone-beam computed tomography (CBCT) to localize tumor in conventionally fractionated radiotherapy for lung tumors. Methods and Materials: A total of 22 lung cancer patients were treated with curative intent with conventionally fractionated radiotherapy using daily image guidance with CBCT. Of these, 13 lung cancer patients had sufficient CBCT scans for analysis (389 CBCT scans). The patients underwent treatment simulation in the BodyFix immobilization system using four-dimensional CT to account for respiratory motion. Daily alignment was first done according to skin tattoos, followed by CBCT. All 389 CBCT scans were retrospectively registered to the planning CT scans using automated soft-tissue and bony registration; the resulting couch shifts in three dimensions were recorded. Results: The daily alignment to skin tattoos with no image guidance resulted in systematic (Σ) and random (σ) errors of 3.2-5.6 mm and 2.0-3.5 mm, respectively. The margin required to account for the setup error introduced by aligning to skin tattoos with no image guidance was approximately 1-1.6 cm. The difference in the couch shifts obtained from the bone and soft-tissue registration resulted in systematic (Σ) and random (σ) errors of 1.5-4.1 mm and 1.8-5.3 mm, respectively. The margin required to account for the setup error introduced using bony anatomy as a surrogate for the target, instead of localizing the target itself, was 0.5-1.4 cm. Conclusion: Using daily CBCT soft-tissue registration to localize the tumor in conventionally fractionated radiotherapy reduced the required setup margin by up to approximately 1.5 cm compared with both no image guidance and image guidance using bony anatomy as a surrogate for the target.
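The reported margins are consistent with the widely used van Herk population margin recipe M = 2.5Σ + 0.7σ (an assumption here, since the abstract does not name its margin formula): plugging in the skin-tattoo errors above reproduces roughly the stated 1-1.6 cm range:

```python
def setup_margin_mm(sigma_systematic, sigma_random):
    """Population-based setup margin, van Herk recipe: M = 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# Skin-tattoo alignment errors reported above: Sigma 3.2-5.6 mm, sigma 2.0-3.5 mm
print(setup_margin_mm(3.2, 2.0), setup_margin_mm(5.6, 3.5))  # ~9.4 and ~16.5 mm
```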

  15. Search, Memory, and Choice Error: An Experiment.

    Science.gov (United States)

    Sanjurjo, Adam

    2015-01-01

    Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways-a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in peoples' behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure.

  16. Search, Memory, and Choice Error: An Experiment.

    Directory of Open Access Journals (Sweden)

    Adam Sanjurjo

    Full Text Available Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways-a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in peoples' behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure.

  17. Use of FMEA analysis to reduce risk of errors in prescribing and administering drugs in paediatric wards: a quality improvement report.

    Science.gov (United States)

    Lago, Paola; Bizzarri, Giancarlo; Scalzotto, Francesca; Parpaiola, Antonella; Amigoni, Angela; Putoto, Giovanni; Perilongo, Giorgio

    2012-01-01

    Administering medication to hospitalised infants and children is a complex process at high risk of error. Failure mode and effect analysis (FMEA) is a proactive tool used to analyse risks, identify failures before they happen and prioritise remedial measures. To examine the hazards associated with the process of drug delivery to children, we performed a proactive risk-assessment analysis. Five multidisciplinary teams, representing different divisions of the paediatric department at Padua University Hospital, were trained to analyse the drug-delivery process, to identify possible causes of failures and their potential effects, to calculate a risk priority number (RPN) for each failure and to plan changes in practices. The aims were to identify higher-priority potential failure modes as defined by RPNs and to plan changes in clinical practice to reduce the risk of patient harm and improve safety in the process of medication use in children. In all, 37 higher-priority potential failure modes and 71 associated causes and effects were identified. The highest RPNs (>48) related mainly to errors in calculating drug doses and concentrations. Many of these failure modes were found in all five units, suggesting the presence of common targets for improvement, particularly in enhancing the safety of prescription and preparation of intravenous drugs. The introduction of new activities in the revised process of administering drugs reduced the high-risk failure modes by 60%. FMEA is an effective proactive risk-assessment tool, useful for helping multidisciplinary groups understand a process of care and identify errors that may occur, prioritising remedial interventions and possibly enhancing the safety of drug delivery in children.
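RPN prioritisation in FMEA is simply the product of severity, occurrence and detectability scores, with modes above a threshold (here >48, as in the study) flagged for action. The failure modes below are hypothetical examples, not taken from the paper:

```python
def rpn(severity, occurrence, detection):
    """FMEA risk priority number; each factor is typically scored 1-10."""
    return severity * occurrence * detection

# Hypothetical failure modes for a paediatric drug-delivery process
modes = {
    "dose miscalculated":      rpn(8, 4, 3),  # 96
    "wrong dilution prepared": rpn(7, 3, 3),  # 63
    "label illegible":         rpn(5, 2, 4),  # 40
}
high_priority = {name: score for name, score in modes.items() if score > 48}
print(high_priority)
```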

  18. Swath-altimetry measurements of the main stem Amazon River: measurement errors and hydraulic implications

    Science.gov (United States)

    Wilson, M. D.; Durand, M.; Jung, H. C.; Alsdorf, D.

    2015-04-01

    The Surface Water and Ocean Topography (SWOT) mission, scheduled for launch in 2020, will provide a step-change improvement in the measurement of terrestrial surface-water storage and dynamics. In particular, it will provide the first, routine two-dimensional measurements of water-surface elevations. In this paper, we aimed to (i) characterise and illustrate in two dimensions the errors which may be found in SWOT swath measurements of terrestrial surface water, (ii) simulate the spatio-temporal sampling scheme of SWOT for the Amazon, and (iii) assess the impact of each of these on estimates of water-surface slope and river discharge which may be obtained from SWOT imagery. We based our analysis on a virtual mission for a ~260 km reach of the central Amazon (Solimões) River, using a hydraulic model to provide water-surface elevations according to SWOT spatio-temporal sampling to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. We thereby obtained water-surface elevation measurements for the Amazon main stem as may be observed by SWOT. Using these measurements, we derived estimates of river slope and discharge and compared them to those obtained directly from the hydraulic model. We found that cross-channel and along-reach averaging of SWOT measurements using reach lengths greater than 4 km for the Solimões and 7.5 km for Purus reduced the effect of systematic height errors, enabling discharge to be reproduced accurately from the water height, assuming known bathymetry and friction. Using cross-sectional averaging and 20 km reach lengths, results show Nash-Sutcliffe model efficiency values of 0.99 for the Solimões and 0.88 for the Purus, with 2.6 and 19.1 % average overall error in discharge, respectively. We extend the results to other rivers worldwide and infer that SWOT-derived discharge estimates may be more accurate for rivers with larger channel widths (permitting a greater level of cross
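The Nash-Sutcliffe efficiency used above to score discharge reproduction is one minus the ratio of squared model error to the variance of the observations. A minimal implementation with invented flow values:

```python
import numpy as np

def nse(simulated, observed):
    """Nash-Sutcliffe model efficiency: 1 - SSE / total variance of observations.
    1.0 is a perfect fit; values <= 0 mean the model beats the mean no better."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 1.0 - np.sum((simulated - observed) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([100.0, 120.0, 150.0, 130.0, 110.0])  # invented discharges
print(nse(obs, obs))        # perfect reproduction gives 1.0
print(nse(obs + 5.0, obs))  # a constant bias lowers the score
```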

  19. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

Full Text Available Background: An increasing number of reports on medication errors and the subsequent damages, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of the pharmacy directors. Direct observation was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. The dispensing error was studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 yielded complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering comprise most of the medication errors, improvements are needed in all four stages. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  20. Error analysis of isotope dilution mass spectrometry method with internal standard

    International Nuclear Information System (INIS)

    Rizhinskii, M.W.; Vitinskii, M.Y.

    1989-02-01

    The computation algorithms of the normalized isotopic ratios and element concentration by isotope dilution mass spectrometry with internal standard are presented. A procedure based on the Monte-Carlo calculation is proposed for predicting the magnitude of the errors to be expected. The estimation of systematic and random errors is carried out in the case of the certification of uranium and plutonium reference materials as well as for the use of those reference materials in the analysis of irradiated nuclear fuels. 4 refs, 11 figs, 2 tabs
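The Monte-Carlo error-prediction procedure described above can be sketched generically. The isotope-ratio expression and all nominal values below are illustrative assumptions, not the paper's actual algorithm: each measured ratio is drawn from a normal distribution around its nominal value, and the spread of the computed concentration gives the predicted error.

```python
import numpy as np

rng = np.random.default_rng(0)

def idms_concentration(r_blend, r_spike, r_sample, c_spike):
    """Toy isotope-dilution expression relating measured isotope ratios to
    sample concentration.  Illustrative only - not the paper's exact formula."""
    return c_spike * (r_spike - r_blend) / (r_blend - r_sample)

# Monte-Carlo error prediction: sample each measured ratio from a normal
# distribution around its (made-up) nominal value and inspect the spread
# of the resulting concentration.
n = 100_000
r_blend = rng.normal(1.0, 0.002, n)
r_spike = rng.normal(10.0, 0.02, n)
r_sample = rng.normal(0.1, 0.001, n)
c = idms_concentration(r_blend, r_spike, r_sample, c_spike=5.0)
mean_c, sigma_c = c.mean(), c.std()
```

The same machinery extends to systematic components by shifting the distribution means rather than widening them.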

  1. Discretisation errors in Landau gauge on the lattice

    International Nuclear Information System (INIS)

Bonnet, F.D.R.; Bowman, P.O.; Leinweber, D.B.

    1999-01-01

Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition improves comparison with the continuum Landau gauge in two ways: (1) through the elimination of O(a²) errors and (2) through a secondary effect of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition. Copyright (1999) CSIRO Australia

  2. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions of a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation, and especially not to the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  3. Applications of human error analysis to aviation and space operations

    International Nuclear Information System (INIS)

    Nelson, W.R.

    1998-01-01

For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) we have been working to apply methods of human error analysis to the design of complex systems. We have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. We are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. These applications lead to different requirements when compared with HRAs performed as part of a PSA. For example, because the analysis will begin early during the design stage, the methods must be usable when only partial design information is available. In addition, the ability to perform numerous 'what if' analyses to identify and compare multiple design alternatives is essential. Finally, since the goals of such human error analyses focus on proactive design changes rather than the estimation of failure probabilities for PSA, there is more emphasis on qualitative evaluations of error relationships and causal factors than on quantitative estimates of error frequency. The primary vehicle we have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. The first NASA-sponsored project had the goal to evaluate human errors caused by advanced cockpit automation. Our next aviation project focused on the development of methods and tools to apply human error analysis to the design of commercial aircraft. This project was performed by a consortium comprised of INEEL, NASA, and Boeing Commercial Airplane Group.
The focus of the project was aircraft design and procedures that could lead to human errors during airplane maintenance

  4. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    Energy Technology Data Exchange (ETDEWEB)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)]

    2012-10-15

, seven required a single correction, one required two corrections, and one required three corrections. Mean residual GTV deviation (3D distance) following GTV-based systematic correction (mean ± 1 standard deviation 4.8 ± 1.5 mm) is significantly lower than for systematic skeletal-based (6.5 ± 2.9 mm, p=0.015) and weekly skeletal-based correction (7.2 ± 3.0 mm, p=0.001), but is not significantly lower than daily skeletal-based correction (5.4 ± 2.6 mm, p=0.34). In two cases, first-day CBCT images reveal tumor changes (one showing tumor growth, the other showing large tumor displacement) that are not readily observed in radiographs. Differences in computed GTV deviations between respiration-correlated and respiration-averaged images are 0.2 ± 1.8 mm in the superior-inferior direction and are of similar magnitude in the other directions. Conclusions: An off-line protocol to correct GTV-based systematic error in locally advanced lung tumor cases can be effective at reducing tumor deviations, although the findings need confirmation with larger patient statistics. In some cases, a single cone-beam CT can be useful for assessing tumor changes early in treatment, if more than a few days elapse between simulation and the start of treatment. Tumor deviations measured with respiration-averaged CT and CBCT images are consistent with those measured with respiration-correlated images; the respiration-averaged method is more easily implemented in the clinic.

  5. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    International Nuclear Information System (INIS)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S.

    2012-01-01

    , seven required a single correction, one required two corrections, and one required three corrections. Mean residual GTV deviation (3D distance) following GTV-based systematic correction (mean ± 1 standard deviation 4.8 ± 1.5 mm) is significantly lower than for systematic skeletal-based (6.5 ± 2.9 mm, p= 0.015), and weekly skeletal-based correction (7.2 ± 3.0 mm, p= 0.001), but is not significantly lower than daily skeletal-based correction (5.4 ± 2.6 mm, p= 0.34). In two cases, first-day CBCT images reveal tumor changes—one showing tumor growth, the other showing large tumor displacement—that are not readily observed in radiographs. Differences in computed GTV deviations between respiration-correlated and respiration-averaged images are 0.2 ± 1.8 mm in the superior-inferior direction and are of similar magnitude in the other directions. Conclusions: An off-line protocol to correct GTV-based systematic error in locally advanced lung tumor cases can be effective at reducing tumor deviations, although the findings need confirmation with larger patient statistics. In some cases, a single cone-beam CT can be useful for assessing tumor changes early in treatment, if more than a few days elapse between simulation and the start of treatment. Tumor deviations measured with respiration-averaged CT and CBCT images are consistent with those measured with respiration-correlated images; the respiration-averaged method is more easily implemented in the clinic.

  6. About errors, inaccuracies and sterotypes: Mistakes in media coverage - and how to reduce them

    Science.gov (United States)

    Scherzler, D.

    2010-12-01

The main complaint made by scientists about the work of journalists is that there are mistakes and inaccuracies in TV programmes, radio or the print media. This seems to be an important reason why too few researchers want to deal with journalists. Such scientists regularly discover omissions, errors, exaggerations, distortions, stereotypes and sensationalism in the media. Surveys carried out in so-called accuracy research seem to confirm this point as well. Errors frequently occur in journalism, and it is the task of the editorial offices to work very hard to keep the number of errors as low as possible. On closer inspection, however, some errors turn out to be simplifications and omissions. Both are inherent to journalism and do not automatically cause factual errors. This paper examines the different kinds of mistakes and misleading information that scientists observe in the mass media. By giving a view from inside the mass media, it tries to explain how errors come to exist in the journalist’s working routines. It outlines that the criteria of journalistic quality which scientists and science journalists apply differ substantially. The expectation of many scientists is that good science journalism passes on their results to the public in as “unadulterated” a form as possible. The author suggests, however, that quality criteria for journalism cannot be derived from how true to detail and how comprehensively it reports on science, nor from the extent to which the journalistic presentation is “correct” in the eyes of the researcher. In its main part, the paper suggests that scientists who are contacted or interviewed by the mass media should not accept that errors just happen. On the contrary, they can do a lot to help prevent mistakes that might occur in the journalistic product. The author proposes several strategies by which scientists and press information officers could identify possible errors, stereotypes and exaggeration by journalists in advance and

  7. Systematic shifts of evaluated charge centroid for the cathode read-out multiwire proportional chamber

    International Nuclear Information System (INIS)

    Endo, I.; Kawamoto, T.; Mizuno, Y.; Ohsugi, T.; Taniguchi, T.; Takeshita, T.

    1981-01-01

We have investigated the systematic error associated with the charge centroid evaluation for the cathode read-out multiwire proportional chamber. Correction curves for the systematic error according to six centroid finding algorithms have been obtained by using the charge distribution calculated in a simple electrostatic model. They have been experimentally examined and proved to be essential for the accurate determination of the irradiated position. (orig.)
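The kind of systematic bias such correction curves remove can be shown with a toy centroid-finding algorithm. The charge shape below is a Lorentzian-like assumption of ours, not the paper's electrostatic model:

```python
import numpy as np

def strip_charges(x_true, pitch=1.0, width=0.6):
    """Toy induced-charge distribution on cathode strips (a Lorentzian-like
    shape - our assumption, not the paper's electrostatic model)."""
    strips = np.arange(-3, 4) * pitch
    charge = 1.0 / (1.0 + ((strips - x_true) / width) ** 2)
    return strips, charge

def centroid_3strip(strips, charge):
    """Centre-of-gravity over the peak strip and its two neighbours - one of
    the simple centroid-finding algorithms such chambers use."""
    i = int(np.argmax(charge))
    sel = slice(i - 1, i + 2)
    return float(np.sum(strips[sel] * charge[sel]) / np.sum(charge[sel]))

# The reconstructed centroid is exact at a strip centre but systematically
# pulled toward it in between - the bias the paper's correction curves remove.
bias_at_quarter = centroid_3strip(*strip_charges(0.25)) - 0.25
```

Plotting this bias as a function of true position over one strip pitch reproduces the S-shaped correction curve characteristic of centre-of-gravity algorithms.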

  8. The effect of biomechanical variables on force sensitive resistor error: Implications for calibration and improved accuracy.

    Science.gov (United States)

    Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P

    2016-03-21

Force Sensitive Resistors (FSRs) are commercially available thin-film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and resulting force readings. The effect of these variables and the degree to which they interact has yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo/mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible, and that if multiple FSRs are used in a system, they must be calibrated independently. Copyright © 2016 Elsevier Ltd. All rights reserved.
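The per-condition calibration and the error metrics named above can be sketched as follows. The polynomial calibration form and the toy sensor responses are our assumptions for illustration only:

```python
import numpy as np

def fit_calibration(conductance, force, deg=2):
    """Per-condition polynomial calibration mapping conductance -> force
    (the polynomial form is an assumption, for illustration only)."""
    return np.polyfit(conductance, force, deg)

def error_metrics(coeffs, conductance, force):
    """RMSE, mean absolute error, and maximum error of a calibration fit."""
    err = np.polyval(coeffs, conductance) - force
    return np.sqrt(np.mean(err ** 2)), np.mean(np.abs(err)), np.max(np.abs(err))

force = np.linspace(1.0, 10.0, 25)          # applied load, arbitrary units
g_flat = 0.1 * force + 0.0005 * force ** 2  # toy response, flat mounting
g_curved = 1.15 * g_flat                    # same sensor on a curved surface

coeffs = fit_calibration(g_flat, force)
rmse_flat, _, _ = error_metrics(coeffs, g_flat, force)
rmse_curved, _, _ = error_metrics(coeffs, g_curved, force)
# Reusing the flat-surface calibration under the curved condition inflates
# error - illustrating why each sensor/condition pair needs its own fit.
```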

  9. (How) do we learn from errors? A prospective study of the link between the ward's learning practices and medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O

    2014-03-01

Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. To test the effectiveness of four types of learning practices, namely non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of 4 hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and measured by a validated structured observation sheet. Wards' use of medication administration technologies, location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and a quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate=.03, p<.05), while workload was associated with increased errors (estimate=.04, p<.05). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate=-.04, p<.05); non-integrated and patchy learning were significantly linked to higher levels of medication administration errors (estimate=-.03, p<.05), while integrated learning was not associated with them (p>.05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technology risk factors. Head nurses can facilitate learning from errors by "management by walking around" and monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Addressing the Problem of Negative Lexical Transfer Errors in Chilean University Students

    Directory of Open Access Journals (Sweden)

    Paul Anthony Dissington

    2018-01-01

    Full Text Available Studies of second language learning have revealed a connection between first language transfer and errors in second language production. This paper describes an action research study carried out among Chilean university students studying English as part of their degree programmes. The study focuses on common lexical errors made by Chilean Spanish-speakers due to negative first language transfer and aims to analyse the effects of systematic instruction and practice of this problematic lexis. It is suggested that raising awareness of lexical transfer through focused attention on common transfer errors is valued by students and seems essential for learners to achieve productive mastery.

  11. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    Science.gov (United States)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
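The crossover behaviour described above follows from a simple scaling argument; the numbers below are illustrative assumptions, not the paper's values. Statistical error shrinks as 1/SNR while template-induced systematic bias is SNR-independent, so bias dominates above a crossover SNR:

```python
# Illustrative scaling only: statistical parameter error falls off as 1/SNR,
# while the systematic bias from an imperfect waveform template is constant
# in SNR.  Both reference values below are made-up for illustration.
SIGMA_AT_SNR10 = 0.05   # assumed statistical error at SNR = 10
TEMPLATE_BIAS = 0.01    # assumed SNR-independent systematic bias

def statistical_error(snr):
    """Statistical error under the 1/SNR scaling assumption."""
    return SIGMA_AT_SNR10 * 10.0 / snr

# Crossover SNR above which the systematic bias dominates the error budget:
crossover_snr = 10.0 * SIGMA_AT_SNR10 / TEMPLATE_BIAS
```

With these assumed numbers the crossover sits at SNR = 50, qualitatively matching the abstract's statement that biases become comparable to statistical errors around SNR ≈ 50 for ground-based instruments.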

  12. Non-intercepted dose errors in prescribing anti-neoplastic treatment

    DEFF Research Database (Denmark)

    Mattsson, T O; Holm, B; Michelsen, H

    2015-01-01

    BACKGROUND: The incidence of non-intercepted prescription errors and the risk factors involved, including the impact of computerised order entry (CPOE) systems on such errors, are unknown. Our objective was to determine the incidence, type, severity, and related risk factors of non-intercepted pr....... Strategies to prevent future prescription errors could usefully focus on integrated computerised systems that can aid dose calculations and reduce transcription errors between databases....

  13. Consumer acceptance of reformulated food products: A systematic review and meta-analysis of salt-reduced foods.

    Science.gov (United States)

    Jaenke, Rachael; Barzi, Federica; McMahon, Emma; Webster, Jacqui; Brimblecombe, Julie

    2017-11-02

Food product reformulation is promoted as an effective strategy to reduce population salt intake and address the associated burden of chronic disease. Salt has a number of functions in food processing, including impacting upon physical and sensory properties. Manufacturers must ensure that reformulation of foods to reduce salt does not compromise consumer acceptability. The aim of this systematic review is to determine to what extent foods can be reduced in salt without detrimental effect on consumer acceptability. Fifty studies reported on salt reduction, replacement or compensation in processed meats, breads, cheeses, soups, and miscellaneous products. For each product category, levels of salt reduction were collapsed into four groups... food products, which in turn will contribute to a healthier food supply.

  14. Analysis of Factors and Medical Errors Involved in Patient Complaints in a European Emergency Department

    Directory of Open Access Journals (Sweden)

    Pauline Haroutunian

    2017-12-01

Full Text Available Introduction: Patients’ complaints about Emergency Departments (EDs) are frequent and can be used as a quality assurance indicator. Objective: Factors contributing to patients’ complaints (PCs) in the emergency department were analyzed. Methods: This was a retrospective cohort study; the qualitative variables of patients’ complaints at the ED of a university hospital were compared with chi-square and t tests. Results: Eighty-five PCs were analyzed. The factors contributing to PCs were: communication (n=26), length of stay (LOS) (n=24), diagnostic errors (n=21), comfort and privacy issues (n=7), pain management (n=6), inappropriate treatment (n=6), and delay of care and billing issues (n=3). PCs were more frequent when patients were managed by residents, during night shifts, weekends, Saturdays, Mondays, January and June. Moreover, diagnostic errors were attributable to poor communication, non-adherence to guidelines and a lack of systematic proofreading of X-rays. In 98% of cases, disputes were resolved by apology and explanation, and three cases resulted in financial compensation. Conclusion: Poor communication, LOS and medical errors are factors contributing to PCs. Improving communication, resolving issues leading to slow health care provision, and adequate staffing and supervision of trainees may reduce PCs.

  15. Lower vs. higher fluid volumes in sepsis-protocol for a systematic review with meta-analysis

    DEFF Research Database (Denmark)

    Meyhoff, T S; Møller, M H; Hjortrup, P B

    2017-01-01

    sequential analysis of randomised clinical trials comparing different strategies to obtain separation in fluid volumes or balances during resuscitation of adult patients with sepsis. We will systematically search the Cochrane Library, MEDLINE, EMBASE, Science Citation Index, BIOSIS and Epistemonikos...... for relevant literature. We will follow the recommendations by the Cochrane Collaboration and the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement. The risk of systematic errors (bias) and random errors will be assessed, and the overall quality of evidence will be evaluated...

  16. Errors in fracture diagnoses in the emergency department--characteristics of patients and diurnal variation

    DEFF Research Database (Denmark)

    Hallas, Peter; Ellingsen, Trond

    2006-01-01

Evaluation of the circumstances related to errors in diagnosis of fractures at an Emergency Department may suggest ways to reduce the incidence of such errors.

  17. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Directory of Open Access Journals (Sweden)

    Timo B Brakenhoff

Full Text Available With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error in the analysis.
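The counterintuitive direction of bias can be demonstrated with a short simulation. All coefficients below are made-up illustration values: classical error in the confounder leaves residual confounding, so the adjusted exposure effect is overestimated rather than attenuated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Simulated "truth": the outcome depends on the exposure (true effect = 1.0)
# and on a confounder that is correlated with the exposure.
confounder = rng.normal(size=n)
exposure = 0.7 * confounder + rng.normal(size=n)
outcome = 1.0 * exposure + 1.0 * confounder + rng.normal(size=n)

def adjusted_effect(x, c, y):
    """OLS coefficient of x in a model with intercept and covariate c."""
    design = np.column_stack([np.ones_like(x), x, c])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

effect_clean = adjusted_effect(exposure, confounder, outcome)
# Classical (random) error in the *confounder* leaves residual confounding,
# so the exposure effect is OVER-estimated here - the opposite of the
# usual "attenuation" intuition.
noisy_confounder = confounder + rng.normal(size=n)
effect_noisy = adjusted_effect(exposure, noisy_confounder, outcome)
```

With these parameters the clean analysis recovers an effect near 1.0 while the noisy-confounder analysis inflates it noticeably, matching the abstract's point that the direction of bias is hard to anticipate.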

  18. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    Science.gov (United States)

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Within this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the most common antimicrobial that caused errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patient and caused harm, and 11% reached the patient and required increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating risks of error across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. IPTV multicast with peer-assisted lossy error control

    Science.gov (United States)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to the impulse noise.
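As a minimal, generic illustration of the FEC idea the abstract builds on (not the actual PAR or SLEP protocol), a single XOR parity packet per source block lets a receiver rebuild any one packet lost to impulse noise without a retransmission:

```python
def xor_parity(packets):
    """Systematic FEC in miniature: one XOR parity packet over a block of
    equal-length source packets lets the receiver rebuild any single loss."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def repair(received, parity):
    """Recover the one packet marked None by XOR-ing parity with survivors."""
    acc = bytearray(parity)
    for p in received:
        if p is not None:
            acc = bytearray(a ^ b for a, b in zip(acc, p))
    return bytes(acc)
```

Real deployments use stronger codes that tolerate burst losses, with retransmission (from servers or, as proposed above, from peers) covering whatever the FEC cannot repair.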

  20. The VTTVIS line imaging spectrometer - principles, error sources, and calibration

    DEFF Research Database (Denmark)

    Jørgensen, R.N.

    2002-01-01

Hyperspectral imaging with a spatial resolution of a few mm² has proved to have a great potential within crop and weed classification and also within nutrient diagnostics. A commonly used hyperspectral imaging system is based on the Prism-Grating-Prism (PGP) principles produced by Specim Ltd... work describing the basic principles, potential error sources, and/or adjustment and calibration procedures. This report fulfils the need for such documentation, with special focus on the system at KVL. The PGP-based system has several severe error sources, which should be removed prior to any analysis... in off-axis transmission efficiencies, diffraction efficiencies, and image distortion have a significant impact on the instrument performance. Procedures removing or minimising these systematic error sources are developed and described for the system built at KVL but can be generalised to other PGP...

  1. Thermal error analysis and compensation for digital image/volume correlation

    Science.gov (United States)

    Pan, Bing

    2018-02-01

    Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
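The reference-sample compensation recommended above amounts to subtracting the apparent deformation of a strain-free reference from the test-sample measurement. The sketch below, with made-up numbers, shows the idea; it is not the authors' implementation:

```python
import numpy as np

def compensate(sample_strain, reference_strain):
    """Reference-sample compensation (sketch): the apparent strain measured
    on a nominally strain-free reference captures the thermal error, and is
    subtracted from the test-sample measurement in situ."""
    return np.asarray(sample_strain) - np.asarray(reference_strain)

# Hypothetical numbers: both samples see the same thermally induced
# apparent strain of 35 microstrain from camera self-heating.
true_strain = np.array([0.0, 100e-6, 200e-6])
thermal_drift = 35e-6
measured_sample = true_strain + thermal_drift
measured_reference = np.full(3, thermal_drift)  # reference is strain-free
corrected = compensate(measured_sample, measured_reference)
```

Because the reference is imaged by the same camera at the same time, the correction tracks drifts in imaging geometry as they occur, which is what makes the approach attractive for in-situ use.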

  2. Monte-Carlo error analysis in x-ray spectral deconvolution

    International Nuclear Information System (INIS)

    Shirk, D.G.; Hoffman, N.M.

    1985-01-01

    The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: It allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels
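The Monte-Carlo error-bar procedure, with the nonnegativity constraint applied after the SVD step, can be sketched on a toy underdetermined problem. The response matrix, spectrum, and 5% noise level below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy response matrix: m = 4 detector channels viewing n = 6 energy bins
# (m < n, so the unfolding has no unique solution, as in the paper).
R = np.array([[1.0, 1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0, 1.0]])
true_spectrum = np.array([1.0, 2.0, 3.0, 2.0, 1.0, 0.5])
data = R @ true_spectrum

def svd_unfold(response, d):
    """Minimum-norm SVD (pseudo-inverse) solution of response @ s = d."""
    return np.linalg.pinv(response) @ d

# Monte-Carlo error bars: unfold noisy replicas of the data, applying the
# nonnegativity constraint AFTER the SVD step (as the abstract suggests),
# then take the per-bin spread as the error bar.
replicas = np.array([
    np.clip(svd_unfold(R, data + rng.normal(scale=0.05 * data)), 0.0, None)
    for _ in range(1000)])
error_bars = replicas.std(axis=0)
```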

  3. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning), and clinical machine volumes. There was some relationship to ...

  4. Incidence and Severity of Prescribing Errors in Parenteral Nutrition for Pediatric Inpatients at a Neonatal and Pediatric Intensive Care Unit

    Directory of Open Access Journals (Sweden)

    Theresa Hermanspann

    2017-06-01

    Objectives: Pediatric inpatients are particularly vulnerable to medication errors (MEs), especially in highly individualized preparations like parenteral nutrition (PN). Aside from prescribing via a computerized physician order entry system (CPOE), we evaluated the effect of cross-checking by a clinical pharmacist to prevent harm from PN order errors in a neonatal and pediatric intensive care unit (NICU/PICU). Methods: The incidence of prescribing errors in PN in a tertiary-level NICU/PICU was surveyed prospectively between March 2012 and July 2013 (n = 3,012 orders). A pharmacist cross-checked all PN orders prior to preparation. Errors were assigned to seven different error-type categories. Three independent experts from different academic tertiary-level NICUs judged the severity of each error according to the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index (categories A–I). Results: The error rate was 3.9% for all 3,012 orders (118 prescribing errors in 111 orders). Seventy-seven errors (6.0% of 1,277 orders) occurred in the concentration-range category, all concerning a relative overdose of calcium gluconate for peripheral infusion. The majority of all events (60%) were assigned to categories C and D (without major harmful consequences), while 28% could not be assigned due to a missing majority decision. Potentially harmful consequences requiring interventions (category E) could have occurred in 12% of assessments. Conclusion: Next to systematic application of clinical guidelines and prescribing via CPOE, order review by a clinical pharmacist is still required to effectively reduce MEs and thus prevent minor and major adverse drug events, with the aim of enhancing medication safety.

  5. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    International Nuclear Information System (INIS)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-01-01

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well
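The two-step structure (first determine locality, then fit a local regression per region) can be sketched on synthetic data. Everything here is an illustrative stand-in: a trivial sign classifier plays the role of the clustering/classification step, and plain least squares replaces random forests/LASSO:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 3
X = rng.normal(size=(n, p))                  # "error indicator" features
# synthetic surrogate-model error with regime-dependent behaviour:
region = (X[:, 0] > 0).astype(int)
y = np.where(region == 0, 0.5 * X[:, 1], 2.0 * X[:, 2]) + 0.01 * rng.normal(size=n)

# step 1: determine locality (here a trivial classifier on feature 0)
# step 2: fit one "local" linear error model per region
models = {}
for r in (0, 1):
    mask = region == r
    A = np.column_stack([X[mask], np.ones(mask.sum())])  # features + intercept
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    models[r] = coef

def predict_error(x):
    r = int(x[0] > 0)                        # route to the local model
    return np.append(x, 1.0) @ models[r]

resid = np.array([predict_error(x) for x in X]) - y
rms = float(np.sqrt(np.mean(resid ** 2)))
print(f"RMS residual of the local error model: {rms:.3f}")
```

A single global linear fit would fail here because the error depends on different features in each regime; the locality step is what makes the cheap local regressions adequate.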

  6. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  7. Learning (from) the errors of a systems biology model.

    Science.gov (United States)

    Engelhardt, Benjamin; Frőhlich, Holger; Kschischo, Maik

    2016-02-11

    Mathematical modelling is a labour intensive process involving several iterations of testing on real data and manual model modifications. In biology, the domain knowledge guiding model development is in many cases itself incomplete and uncertain. A major problem in this context is that biological systems are open. Missed or unknown external influences as well as erroneous interactions in the model could thus lead to severely misleading results. Here we introduce the dynamic elastic-net, a data driven mathematical method which automatically detects such model errors in ordinary differential equation (ODE) models. We demonstrate for real and simulated data, how the dynamic elastic-net approach can be used to automatically (i) reconstruct the error signal, (ii) identify the target variables of model error, and (iii) reconstruct the true system state even for incomplete or preliminary models. Our work provides a systematic computational method facilitating modelling of open biological systems under uncertain knowledge.

  8. A systematic review of patient medication error on self-administering medication at home.

    Science.gov (United States)

    Mira, José Joaquín; Lorenzo, Susana; Guilabert, Mercedes; Navarro, Isabel; Pérez-Jover, Virtudes

    2015-06-01

    Medication errors have been analyzed as a health professionals' responsibility (due to mistakes in prescription, preparation or dispensing). However, sometimes patients themselves (or their caregivers) make mistakes in the administration of the medication. The epidemiology of patient medication errors (PEs) has been scarcely reviewed in spite of its impact on people, on therapeutic effectiveness and on incremental cost for the health systems. This study reviews and describes the methodological approaches and results of published studies on the frequency, causes and consequences of medication errors committed by patients at home. A review of research articles published between 1990 and 2014 was carried out using MEDLINE, Web-of-Knowledge, Scopus, Tripdatabase and Index Medicus. The frequency of PEs ranged between 19% and 59%. The elderly and preschool populations accounted for more mistakes than other groups. The most common were: incorrect dosage, forgetting, mixing up medications, failing to recall indications and taking out-of-date or inappropriately stored drugs. The majority of these mistakes have no negative consequences. Health literacy, information and communication, and complexity of use of dispensing devices were identified as causes of PEs. Apps and other new technologies offer several opportunities for improving drug safety.

  9. Reducing Bias and Error in the Correlation Coefficient Due to Nonnormality

    Science.gov (United States)

    Bishara, Anthony J.; Hittner, James B.

    2015-01-01

    It is more common for educational and psychological data to be nonnormal than to be approximately normal. This tendency may lead to bias and error in point estimates of the Pearson correlation coefficient. In a series of Monte Carlo simulations, the Pearson correlation was examined under conditions of normal and nonnormal data, and it was compared…
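The kind of Monte Carlo comparison described can be sketched directly: draw many small samples from a normal and from a skewed (lognormal) population with the same underlying dependence, and compare the small-sample Pearson r against a near-population value. The parameter choices below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n, trials = 0.5, 20, 4000
cov = np.array([[1.0, rho], [rho, 1.0]])

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def sample_r(transform, size):
    z = rng.multivariate_normal([0, 0], cov, size)
    x, y = transform(z[:, 0]), transform(z[:, 1])
    return pearson(x, y)

for name, f in [("normal", lambda v: v), ("lognormal", np.exp)]:
    pop_r = sample_r(f, 2_000_000)             # near-population correlation
    rs = np.array([sample_r(f, n) for _ in range(trials)])
    print(f"{name:9s} pop r = {pop_r:.3f}  mean sample r = {rs.mean():.3f}  "
          f"SD = {rs.std():.3f}")
```

Note that exponentiating reduces the population Pearson correlation itself (skewness attenuates linear correlation), and the small-sample estimates around it become noisier, which is the bias/error phenomenon the study examines.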

  10. CALIBRATION ERRORS IN THE CAVITY BEAM POSITION MONITOR SYSTEM AT THE ATF2

    CERN Document Server

    Cullinan, F; Joshi, N; Lyapin, A

    2011-01-01

    It has been shown at the Accelerator Test Facility at KEK, that it is possible to run a system of 37 cavity beam position monitors (BPMs) and achieve high working resolution. However, stability of the calibration constants (position scale and radio frequency (RF) phase) over a three/four week running period is yet to be demonstrated. During the calibration procedure, random beam jitter gives rise to a statistical error in the position scale and slow orbit drift in position and tilt causes systematic errors in both the position scale and RF phase. These errors are dominant and have been evaluated for each BPM. The results are compared with the errors expected after a tested method of beam jitter subtraction has been applied.

  11. Systematic Review of Interventions to Reduce Urinary Tract Infection in Nursing Home Residents

    Science.gov (United States)

    Meddings, Jennifer; Saint, Sanjay; Krein, Sarah L.; Gaies, Elissa; Reichert, Heidi; Hickner, Andrew; McNamara, Sara; Mann, Jason D.; Mody, Lona

    2017-01-01

    BACKGROUND Urinary tract infections (UTIs) in nursing homes are common, costly, and morbid. PURPOSE Systematic literature review of strategies to reduce UTIs in nursing home residents. DATA SOURCES Ovid MEDLINE, Cochrane Library, CINAHL, Web of Science and Embase through June 22, 2015. STUDY SELECTION Interventional studies with a comparison group reporting at least one outcome for: catheter-associated UTI (CAUTI), UTIs not identified as catheter-associated, bacteriuria, or urinary catheter use. DATA EXTRACTION Two authors abstracted study design, participant and intervention details, outcomes, and quality measures. DATA SYNTHESIS Of 5,794 records retrieved, 20 records describing 19 interventions were included: 8 randomized controlled trials, 10 pre-post non-randomized interventions, and 1 non-randomized intervention with concurrent controls. Quality (range 8-25, median 15) and outcome definitions varied greatly. Thirteen studies employed strategies to reduce catheter use or improve catheter care; nine studies employed general infection prevention strategies (e.g., improving hand hygiene, surveillance, contact precautions, reducing antibiotics). The nineteen studies reported 12 UTI outcomes, 9 CAUTI outcomes, 4 bacteriuria outcomes, and 5 catheter use outcomes. Five studies showed CAUTI reduction (1 significantly); nine studies showed UTI reduction (none significantly); 2 studies showed bacteriuria reduction (none significantly). Four studies showed reduced catheter use (1 significantly). LIMITATIONS Studies were often underpowered to assess statistical significance; none were pooled given the variety of interventions and outcomes. CONCLUSIONS Several practices, often implemented in bundles, appear to reduce UTI or CAUTI in nursing home residents, such as improving hand hygiene, reducing and improving catheter use, managing incontinence without catheters, and using enhanced barrier precautions. PMID:28459908

  12. A randomised trial of Supine versus Prone breast radiotherapy (SuPr study): Comparing set-up errors and respiratory motion

    International Nuclear Information System (INIS)

    Kirby, Anna M.; Evans, Philip M.; Helyer, Sarah J.; Donovan, Ellen M.; Convery, Helen M.; Yarnold, John R.

    2011-01-01

    Purpose: To test a prone position against the international-standard supine position in women undergoing whole-breast-radiotherapy (WBRT) after wide-local-excision (WLE) of early breast cancer (BC) in terms of feasibility, set-up errors, and respiratory motion. Methods: Following WLE of BC with insertion of tumour-bed clips, patients underwent 4D-CT for WBRT-planning in supine and prone positions (the latter using an in-house-designed platform). Patients were randomised to undergo WBRT fractions 1-7 in one position, switching to the alternate position for fractions 8-15 (40 Gy/15-fractions total). Cone-beam CT-images (CBCT) were acquired prior to fractions 1, 4, 7, 8, 11 and 14. CBCT data were matched to planning-CT data using (i) chest-wall and (ii) clips. Systematic and random errors were calculated. Maximal displacement of chest-wall and clips with respiration was measured on 4D-CT. Clinical- to planning-target-volume (CTV-PTV) margins were calculated. Patient-comfort-scores and treatment-times were evaluated. Results: Twenty-five patients were randomized. 192/192 (100%) planned supine fractions and 173/192 (90%) prone fractions were completed. 3D population systematic errors were 1.3-1.9 mm (supine) and 3.1-4.3 mm (prone) (p = 0.02) and random errors 2.6-3.2 mm (supine) and 3.8-5.4 mm (prone) (p = 0.02). Prone positioning reduced chest-wall and clip motion (0.5 ± 0.2 mm (prone) versus 2.7 ± 0.5 mm (supine) (p < 0.001)) with respiration. Calculated CTV-PTV margins were greater for prone (12-16 mm) than for supine treatment (10 mm). Patient-comfort-scores and treatment times were comparable (p = 0.06). Conclusions: Set-up errors were greater using our prone technique than for our standard supine technique, resulting in the need for larger CTV-PTV margins in the prone position. Further work is required to optimize the prone treatment-platform and technique before it can become a standard treatment option at our institution.

  13. Visual correlation analytics of event-based error reports for advanced manufacturing

    OpenAIRE

    Nazir, Iqbal

    2017-01-01

    With the growing digitalization and automation in the manufacturing domain, an increasing amount of process data and error reports become available. To minimize the number of errors and maximize the efficiency of the production line, it is important to analyze the generated error reports and find solutions that can reduce future errors. However, not all errors have the equal importance, as some errors may be the result of previously occurred errors. Therefore, it is important for domain exper...

  14. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
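The two advocated statistics are straightforward to compute from a set of benchmark errors. The synthetic error distribution below (skewed, not zero-centered) and the threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
# model errors that are neither normal nor zero-centered:
errors = rng.gamma(shape=2.0, scale=0.8, size=5000) - 0.5

abs_err = np.abs(errors)
eta = 1.0
# (1) probability that a new calculation has absolute error below eta
p_below = np.mean(abs_err < eta)            # empirical CDF evaluated at eta
# (2) maximal error amplitude expected at 95% confidence
q95 = np.quantile(abs_err, 0.95)

print(f"P(|error| < {eta}) = {p_below:.2f}")
print(f"Q95 = {q95:.2f}")
```

Unlike the mean signed/unsigned error, both quantities translate directly into statements an end-user can act on ("95% of predictions err by less than Q95").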

  15. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    Science.gov (United States)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non- safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  16. Servo control booster system for minimizing following error

    Science.gov (United States)

    Wise, W.L.

    1979-07-26

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
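The by-exception behaviour can be illustrated with a toy discrete-time loop: a primary proportional controller tracks a ramp command, and a boost correction engages only while |command − response| ≥ ΔS_R. All dynamics and gains here are invented for illustration and are not the patented system:

```python
dS_R = 0.01                      # position feedback resolution least increment
kp_primary, kp_boost = 0.2, 0.6  # assumed loop gains
dt, steps = 0.01, 1000

pos = 0.0
errs = []
for k in range(steps):
    cmd = 0.5 * k * dt           # ramp command
    err = cmd - pos
    u = kp_primary * err         # conventional control means
    if abs(err) >= dS_R:         # second loop engages only by exception
        u += kp_boost * err      # precise position correction signal
    pos += u                     # simple integrating plant
    errs.append(abs(cmd - pos))

print(f"final following error = {errs[-1]:.4f} (dS_R = {dS_R})")
```

With only the primary gain the steady following error of this toy loop would settle well above ΔS_R; the exception loop pulls it back whenever it reaches the threshold, so the recorded error hovers at or below roughly ΔS_R.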

  17. Analysis of the interface tracking errors

    International Nuclear Information System (INIS)

    Cerne, G.; Tiselj, I.; Petelin, S.

    2001-01-01

    An important limitation of the interface-tracking algorithm is the grid density, which determines the space scale of the surface tracking. In this paper the analysis of the interface tracking errors, which occur in a dispersed flow, is performed for the VOF interface tracking method. A few simple two-fluid tests are proposed for the investigation of the interface tracking errors and their grid dependence. When the grid density becomes too coarse to follow the interface changes, the errors can be reduced either by using denser nodalization or by switching to the two-fluid model during the simulation. Both solutions are analyzed and compared on a simple vortex-flow test. (author)

  18. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system, and that the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power.
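A statistical Monte Carlo estimate of this kind propagates the RTD temperature uncertainties into the power P = ṁ·cp·(T_out − T_in). The sketch below uses entirely illustrative numbers (flow rate, temperatures, and RTD sigmas are assumptions, not KMRR data), but shows why a coarse temperature sensor can blow the power-error budget over a small ΔT:

```python
import numpy as np

rng = np.random.default_rng(5)
m_dot, cp = 800.0, 4.18e3          # kg/s, J/(kg K) -- illustrative values
T_in, T_out = 35.0, 44.0           # deg C: a 9 K rise across the loop
P_true = m_dot * cp * (T_out - T_in)

def relative_power_error(sigma_T, trials=20000):
    # independent RTD errors on both temperature measurements
    dT = (T_out + rng.normal(0, sigma_T, trials)) - (T_in + rng.normal(0, sigma_T, trials))
    P = m_dot * cp * dT
    return np.std(P) / P_true

for label, sigma in [("commercial RTD (sigma=0.5 K)", 0.5),
                     ("precision RTD (sigma=0.05 K)", 0.05)]:
    print(f"{label}: ~{100 * relative_power_error(sigma):.1f}% power error (1 sigma)")
```

Analytically the relative error is √2·σ_T/ΔT, so with these assumed numbers the coarse RTD exceeds a 5% budget while the precision RTD stays well under 1%.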

  19. Optics measurement algorithms and error analysis for the proton energy frontier

    CERN Document Server

    Langner, A

    2015-01-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits in the maximum excitation amplitude and the total beam charge, reducing the signal to noise ratio of optics measurements. Furthermore the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and demonstrates to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed which, due to the improved algorithms, result in a significantly higher precision of the derived optical parameters and decreased...

  20. Multi-qubit compensation sequences

    International Nuclear Information System (INIS)

    Tomita, Y; Merrill, J T; Brown, K R

    2010-01-01

    The Hamiltonian control of n qubits requires precision control of both the strength and timing of interactions. Compensation pulses relax the precision requirements by reducing unknown but systematic errors. Using composite pulse techniques designed for single qubits, we show that systematic errors for n-qubit systems can be corrected to arbitrary accuracy given either two non-commuting control Hamiltonians with identical systematic errors or one error-free control Hamiltonian. We also examine composite pulses in the context of quantum computers controlled by two-qubit interactions. For quantum computers based on the XY interaction, single-qubit composite pulse sequences naturally correct systematic errors. For quantum computers based on the Heisenberg or exchange interaction, the composite pulse sequences reduce the logical single-qubit gate errors but increase the errors for logical two-qubit gates.
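The single-qubit composite-pulse machinery the abstract builds on can be illustrated with the classic BB1 sequence (Wimperis), which cancels a systematic amplitude (over-rotation) error to high order. This sketch is a generic illustration of that technique, not code from the paper:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(theta, phi, eps=0.0):
    # rotation by theta (with fractional amplitude error eps) about an
    # axis at angle phi in the xy-plane of the Bloch sphere
    axis = np.cos(phi) * X + np.sin(phi) * Y
    t = theta * (1 + eps)
    return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * axis

def infidelity(U, V):
    return 1 - abs(np.trace(U.conj().T @ V)) ** 2 / 4

theta, eps = np.pi / 2, 0.05       # target rotation; 5% systematic amplitude error
target = rot(theta, 0.0)
naive = rot(theta, 0.0, eps)       # single faulty pulse

phi = np.arccos(-theta / (4 * np.pi))   # BB1 correction phase
# time order: theta_0, then pi_phi, 2pi_{3phi}, pi_phi (leftmost applied last)
bb1 = rot(np.pi, phi, eps) @ rot(2 * np.pi, 3 * phi, eps) \
      @ rot(np.pi, phi, eps) @ rot(theta, 0.0, eps)

print(f"naive infidelity: {infidelity(target, naive):.2e}")
print(f"BB1   infidelity: {infidelity(target, bb1):.2e}")
```

Every pulse in the sequence carries the same systematic error eps, yet the composite rotation is far closer to the target than the single pulse, which is the compensation property the paper extends to n-qubit control Hamiltonians.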

  1. Systematic review of stigma reducing interventions for African/Black diasporic women.

    Science.gov (United States)

    Loutfy, Mona; Tharao, Wangari; Logie, Carmen; Aden, Muna A; Chambers, Lori A; Wu, Wei; Abdelmaseh, Marym; Calzavara, Liviana

    2015-01-01

    Literature indicates that racism, sexism, homophobia and HIV-related stigma have adverse impacts on health, well-being, and quality of life among HIV-positive women of African descent (African/Black diaspora). However, limited evidence exists on the effectiveness of interventions aimed at reducing stigma tailored for these women. This study systematically reviewed randomized controlled trials (RCTs), non-randomized observational and quasi-experimental studies evaluating the effectiveness of interventions aimed at reducing stigma experienced by this population. The Cochrane methodology was used to develop a search strategy in consultation with a librarian scientist. Databases searched included the Cochrane Library, Ovid EMBASE, PsycInfo, and 10 others. Two reviewers independently assessed the studies for potential relevance and conducted the Cochrane grading of RCTs to assess risk of bias and the Newcastle-Ottawa scale to assess the quality of non-randomized studies. Eligible papers were selected if they employed an intervention design with African/Black diasporic women living with HIV as the target population and had a primary outcome of stigma reduction. Of the five studies that met all of the eligibility criteria, four demonstrated the effectiveness of interventions in reducing HIV-related stigma. Only two of the five studies were designed specifically for HIV-positive African/Black diasporic women. Limitations included the absence of interventions addressing other forms of stigma and discrimination (e.g. gender discrimination, racism, heterosexism). Our findings suggest that there are limited interventions designed to address multiple forms of stigma, including gender and racial discrimination, experienced by HIV-positive African/Black diasporic women.

  2. ERROR VS REJECTION CURVE FOR THE PERCEPTRON

    OpenAIRE

    PARRONDO, JMR; VAN DEN BROECK, Christian

    1993-01-01

    We calculate the generalization error epsilon for a perceptron J, trained by a teacher perceptron T, on input patterns S that form a fixed angle arccos (J.S) with the student. We show that the error is reduced from a power law to an exponentially fast decay by rejecting input patterns that lie within a given neighbourhood of the decision boundary J.S = 0. On the other hand, the error vs. rejection curve epsilon(rho), where rho is the fraction of rejected patterns, is shown to be independent ...

  3. In-hospital fellow coverage reduces communication errors in the surgical intensive care unit.

    Science.gov (United States)

    Williams, Mallory; Alban, Rodrigo F; Hardy, James P; Oxman, David A; Garcia, Edward R; Hevelone, Nathanael; Frendl, Gyorgy; Rogers, Selwyn O

    2014-06-01

    Staff coverage strategies of intensive care units (ICUs) impact clinical outcomes. High-intensity staff coverage strategies are associated with lower morbidity and mortality. Accessible clinical expertise, teamwork, and effective communication have all been attributed to the success of this coverage strategy. We evaluate the impact of in-hospital fellow coverage (IHFC) on improving communication of cardiorespiratory events. A prospective observational study was performed in an academic tertiary care center with high-intensity staff coverage. The main outcome measure was resident-to-fellow communication of cardiorespiratory events during IHFC vs home coverage (HC) periods. Three hundred twelve cardiorespiratory events were collected in 114 surgical ICU patients over 134 study days. Complete data were available for 306 events. One hundred three communication errors occurred. IHFC was associated with significantly better communication of events compared to HC: 89% of events were communicated during IHFC vs 51% of events during HC. Communication patterns of junior and midlevel residents were similar. Midlevel residents communicated 68% of all on-call events (87% IHFC vs 50% HC); junior residents communicated 66% of events (94% IHFC vs 52% HC). Communication errors were lower in all ICUs during IHFC. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. SYSTEMATIC UNCERTAINTIES IN BLACK HOLE MASSES DETERMINED FROM SINGLE-EPOCH SPECTRA

    International Nuclear Information System (INIS)

    Denney, Kelly D.; Peterson, Bradley M.; Dietrich, Matthias; Bentz, Misty C.; Vestergaard, Marianne

    2009-01-01

    We explore the nature of systematic errors that can arise in measurement of black hole masses from single-epoch (SE) spectra of active galactic nuclei (AGNs) by utilizing the many epochs available for NGC 5548 and PG1229+204 from reverberation mapping (RM) databases. In particular, we examine systematics due to AGN variability, contamination due to constant spectral components (i.e., narrow lines and host galaxy flux), data quality (i.e., signal-to-noise ratio (S/N)), and blending of spectral features. We investigate the effect that each of these systematics has on the precision and accuracy of SE masses calculated from two commonly used line width measures by comparing these results to recent RM studies. We calculate masses by characterizing the broad Hβ emission line by both the full width at half maximum and the line dispersion, and demonstrate the importance of removing narrow emission-line components and host starlight. We find that the reliability of line width measurements rapidly decreases for S/N lower than ∼10-20 (per pixel), and that fitting the line profiles instead of direct measurement of the data does not mitigate this problem but can, in fact, introduce systematic errors. We also conclude that a full spectral decomposition to deblend the AGN and galaxy spectral features is unnecessary, except to judge the contribution of the host galaxy to the luminosity and to deblend any emission lines that may inhibit accurate line width measurements. Finally, we present an error budget which summarizes the minimum observable uncertainties as well as the amount of additional scatter and/or systematic offset that can be expected from the individual sources of error investigated. In particular, we find that the minimum observable uncertainty in SE mass estimates due to variability is smallest for high-quality (S/N ∼ 20 pixel⁻¹) spectra.

  5. Interactive analysis of human error factors in NPP operation events

    International Nuclear Information System (INIS)

    Zhang Li; Zou Yanhua; Huang Weigang

    2010-01-01

    Interactions of human error factors in NPP operation events are analyzed. 645 WANO operation event reports from 1999 to 2008 were examined, among which 432 were found to be related to human errors. After classifying these errors by root causes or causal factors and applying SPSS for correlation analysis, we concluded: (1) Personnel work practices are restricted by many factors; forming good personnel work practices is systematic work that needs support in many respects. (2) Verbal communications, personnel work practices, man-machine interface, and written procedures and documents play great roles. They are four interacting factors which often come in a bundle; if improvements need to be made to one of them, synchronous measures are also necessary for the others. (3) Management direction and decision process, which are related to management, have a significant interaction with personnel factors. (authors)

  6. Nursing Errors in Intensive Care Unit by Human Error Identification in Systems Tool: A Case Study

    Directory of Open Access Journals (Sweden)

    Nezamodini

    2016-03-01

    Full Text Available Background Although health services are designed and implemented to improve human health, errors in health services are a very common and sometimes fatal phenomenon. Medical errors and their costs are global issues with serious consequences for the patient community; they are preventable and require serious attention. Objectives The current study aimed to identify possible nursing errors by applying the human error identification in systems tool (HEIST) in the intensive care units (ICUs) of hospitals. Patients and Methods This descriptive research was conducted in the intensive care unit of a hospital in Khuzestan province in 2013. Data were collected through observation of and interviews with nine nurses in this unit over a period of four months. Human error classification was based on the Rose and Rose and the Swain and Guttmann models. Following the HEIST worksheets, the guide questions were answered and error causes were identified after determining the type of each error. Results In total, 527 errors were detected. Performing an operation on the wrong path had the highest frequency (150), followed by performing tasks later than the deadline (136). Management causes, with a frequency of 451, ranked first among the identified causes. Errors mostly occurred in the system observation stage, and among the performance shaping factors (PSFs), time was the most influential factor in the occurrence of human errors. Conclusions Finally, in order to prevent the occurrence and reduce the consequences of the identified errors, the following measures were proposed: appropriate training courses, applying work guidelines and monitoring their implementation, increasing the number of work shifts, hiring a professional workforce, and equipping the workspace with appropriate facilities and equipment.

  7. The effectiveness of virtual reality on reducing pain and anxiety in burn injury patients: a systematic review.

    Science.gov (United States)

    Morris, Linzette Deidré; Louw, Quinette Abegail; Grimmer-Somers, Karen

    2009-01-01

    To systematically review the current evidence for the effectiveness of Virtual Reality (VR), in conjunction with pharmacologic analgesia, on reducing pain and anxiety in burn injury patients undergoing wound dressing changes and physiotherapy management, compared with pharmacologic analgesia alone or other forms of distraction. A comprehensive search was conducted between December 2007 and January 2008, and updated in January 2009, before publication. Computerized bibliographic databases were individually searched using specifically developed search strategies to identify eligible studies. Nine studies were deemed eligible for inclusion in this review. Wound dressing changes were the most common procedure during which VR was trialed. Pain was the primary outcome measure in all of the included studies. Anxiety was a secondary outcome measure in 3 of the 9 included studies. VR, in conjunction with pharmacologic analgesics, significantly reduced pain experienced by burn injury patients during wound dressing changes and physiotherapy. There is equivocal evidence for the effect of VR in conjunction with pharmacologic analgesics on reducing anxiety in burn injury patients during wound dressing changes and physiotherapy. This is the first known systematic review to report on the effectiveness of VR, in conjunction with pharmacologic analgesia, on reducing pain and anxiety in burn injury patients undergoing wound dressing changes and physiotherapy management compared with pharmacologic analgesia alone or other forms of distraction. Used as an adjunct to current burn pain management regimens, VR could possibly assist health professionals in making the rehabilitation process for burn patients less excruciating, thereby improving functional outcomes. Further research investigating the effect of VR on anxiety in burn injury patients is warranted.

  8. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    Science.gov (United States)

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors of the model. The errors, which are significant for low-magnitude, short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise, modulated by the envelope first and then filtered).
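
The two constructions contrasted above can be sketched numerically. The following is a minimal illustration, not the authors' actual simulation; the envelope shape, filter order, and band edges are invented placeholders:

```python
import numpy as np
from scipy.signal import butter, lfilter

rng = np.random.default_rng(0)
dt, n = 0.01, 2048
t = np.arange(n) * dt
envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)            # simple rise-and-decay envelope, peaking at t = 2 s
b, a = butter(4, [0.5, 10.0], btype="bandpass", fs=1.0 / dt)  # placeholder band-pass "ground" filter
white = rng.standard_normal(n)

# Uniformly modulated filtered white noise: filter the noise, then modulate.
umfwn = envelope * lfilter(b, a, white)

# Filtered shot noise: modulate the noise first, then filter.
fsn = lfilter(b, a, envelope * white)
```

In the first construction the slowly varying envelope multiplies an already band-limited signal, spreading energy to frequencies below the filter band; in the second, the filter acts after modulation, so the output stays band-limited, which is the remedy the abstract describes.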

  9. A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers

    Science.gov (United States)

    Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen

    2016-01-01

    We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
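
The way such a budget totals many individual contributions can be illustrated with a quadrature sum of independent error terms. The term names and values below are invented placeholders, not figures from the NEID budget:

```python
import math

# Hypothetical independent error terms, in m/s (placeholder values).
error_terms_m_s = {
    "wavelength_calibration": 0.20,
    "guiding_and_injection": 0.15,
    "detector_effects": 0.12,
    "software_and_fitting": 0.10,
}

# Independent error sources add in quadrature; no single term sets the floor.
total_m_s = math.sqrt(sum(v ** 2 for v in error_terms_m_s.values()))
```

With these placeholder values the total is about 0.29 m/s while the largest single term is 0.20 m/s, illustrating the abstract's point that the overall precision is set by the combined contribution of many comparable sources rather than one dominant one.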

  10. A NEW METHOD TO QUANTIFY AND REDUCE THE NET PROJECTION ERROR IN WHOLE-SOLAR-ACTIVE-REGION PARAMETERS MEASURED FROM VECTOR MAGNETOGRAMS

    Energy Technology Data Exchange (ETDEWEB)

    Falconer, David A.; Tiwari, Sanjiv K.; Moore, Ronald L. [NASA Marshall Space Flight Center, Huntsville, AL 35812 (United States); Khazanov, Igor, E-mail: David.a.Falconer@nasa.gov [Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, AL 35899 (United States)

    2016-12-20

    Projection errors limit the use of vector magnetograms of active regions (ARs) far from the disk center. In this Letter, for ARs observed up to 60° from the disk center, we demonstrate a method for measuring and reducing the projection error in the magnitude of any whole-AR parameter that is derived from a vector magnetogram that has been deprojected to the disk center. The method assumes that the center-to-limb curve of the average of the parameter’s absolute values, measured from the disk passage of a large number of ARs and normalized to each AR’s absolute value of the parameter at central meridian, gives the average fractional projection error at each radial distance from the disk center. To demonstrate the method, we use a large set of large-flux ARs and apply the method to a whole-AR parameter that is among the simplest to measure: whole-AR magnetic flux. We measure 30,845 SDO/Helioseismic and Magnetic Imager vector magnetograms covering the disk passage of 272 large-flux ARs, each having whole-AR flux >10²² Mx. We obtain the center-to-limb radial-distance run of the average projection error in measured whole-AR flux from a Chebyshev fit to the radial-distance plot of the 30,845 normalized measured values. The average projection error in the measured whole-AR flux of an AR at a given radial distance is removed by multiplying the measured flux by the correction factor given by the fit. The correction is important for both the study of the evolution of ARs and for improving the accuracy of forecasts of an AR’s major flare/coronal mass ejection productivity.
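
The correction procedure reads, in outline, as a curve fit followed by a rescaling. Here is a schematic sketch with synthetic data standing in for the 30,845 normalized flux measurements; the error-curve shape, noise level, and polynomial degree are assumptions, not the Letter's values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: radial distance (0 = disk center, out to ~60 deg)
# and normalized whole-AR flux, whose downward trend mimics the
# center-to-limb projection error.
r = rng.uniform(0.0, 0.87, 3000)
assumed_error_curve = 1.0 - 0.3 * r ** 2            # invented center-to-limb error shape
flux_norm = assumed_error_curve + 0.05 * rng.standard_normal(r.size)

# Fit the center-to-limb run with a Chebyshev polynomial, as in the Letter.
coeffs = np.polynomial.chebyshev.chebfit(r, flux_norm, deg=4)
fit = np.polynomial.chebyshev.chebval(r, coeffs)

# Dividing by the fitted curve (equivalently, multiplying by its
# reciprocal, the correction factor in the Letter's terms) removes the
# average projection error.
flux_corrected = flux_norm / fit
```

After the correction the synthetic fluxes scatter around 1 at all radial distances, i.e., the systematic center-to-limb trend is gone and only the random scatter remains.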

  11. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMS

    International Nuclear Information System (INIS)

    Diehl, S.E.; Ochoa, A. Jr.; Dressendorfer, P.V.; Koga, R.; Kolasinski, W.A.

    1982-06-01

    Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors

  12. PS-022 Complex automated medication systems reduce medication administration error rates in an acute medical ward

    DEFF Research Database (Denmark)

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2017-01-01

    Background Medication errors have received extensive attention in recent decades and are of significant concern to healthcare organisations globally. Medication errors occur frequently, and adverse events associated with medications are one of the largest causes of harm to hospitalised patients...... cabinet, automated dispensing and barcode medication administration; (2) non-patient specific automated dispensing and barcode medication administration. The occurrence of administration errors was observed in three 3 week periods. The error rates were calculated by dividing the number of doses with one...

  13. Accurate and systematically improvable density functional theory embedding for correlated wavefunctions

    International Nuclear Information System (INIS)

    Goodpaster, Jason D.; Barnes, Taylor A.; Miller, Thomas F.; Manby, Frederick R.

    2014-01-01

    We analyze the sources of error in quantum embedding calculations in which an active subsystem is treated using wavefunction methods, and the remainder using density functional theory. We show that the embedding potential felt by the electrons in the active subsystem makes only a small contribution to the error of the method, whereas the error in the nonadditive exchange-correlation energy dominates. We test an MP2 correction for this term and demonstrate that the corrected embedding scheme accurately reproduces wavefunction calculations for a series of chemical reactions. Our projector-based embedding method uses localized occupied orbitals to partition the system; as with other local correlation methods, abrupt changes in the character of the localized orbitals along a reaction coordinate can lead to discontinuities in the embedded energy, but we show that these discontinuities are small and can be systematically reduced by increasing the size of the active region. Convergence of reaction energies with respect to the size of the active subsystem is shown to be rapid for all cases where the density functional treatment is able to capture the polarization of the environment, even in conjugated systems, and even when the partition cuts across a double bond

  14. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    Science.gov (United States)

    Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.

    2014-01-01

    Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877

  15. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into nanoscale regime, it is impossible to guarantee the perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes detailed description of key issues in NOC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...
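
As a concrete taste of the ECC machinery involved, here is a minimal Hamming(7,4) single-error-correcting encoder/decoder — a simplified cousin of the Hamming Product Codes the book integrates into NoC links, not code from the book itself:

```python
import numpy as np

# Hamming(7,4) in systematic form: G = [I4 | P], H = [P^T | I3].
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])   # generator matrix
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])   # parity-check matrix

def encode(data4):
    """Map 4 data bits to a 7-bit codeword."""
    return (np.array(data4) @ G) % 2

def decode(code7):
    """Correct up to one flipped bit and return the 4 data bits."""
    code7 = np.array(code7).copy()
    syndrome = (H @ code7) % 2
    if syndrome.any():
        # A single-bit error yields a syndrome equal to the column of H
        # at the error position, so match it to locate the flip.
        err_pos = np.where((H.T == syndrome).all(axis=1))[0][0]
        code7[err_pos] ^= 1
    return code7[:4]
```

For example, `decode` recovers `[1, 0, 1, 1]` from `encode([1, 0, 1, 1])` even after any single bit of the codeword is flipped in transit; product codes and HARQ schemes build stronger protection out of exactly this kind of component code.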

  16. Good ergonomics and team diversity reduce absenteeism and errors in car manufacturing.

    Science.gov (United States)

    Fritzsche, Lars; Wegge, Jürgen; Schmauder, Martin; Kliegel, Matthias; Schmidt, Klaus-Helmut

    2014-01-01

    Prior research suggests that ergonomic work design and mixed teams (in age and gender) may compensate for declines in certain abilities of ageing employees. This study investigates simultaneous effects of both team-level factors on absenteeism and performance (error rates) over one year in a sample of 56 car assembly teams (N = 623). Results show that age was related to prolonged absenteeism and more mistakes in work planning, but not to overall performance. In comparison, high physical workload was strongly associated with longer absenteeism and increased error rates. Furthermore, controlling for physical workload, age diversity was related to shorter absenteeism, and the presence of females in the team was associated with shorter absenteeism and better performance. In summary, this study suggests that both ergonomic work design and mixed team composition may compensate for age-related productivity risks in manufacturing by maintaining the work ability of older employees and improving job quality.

  17. Seamless Mobile Multimedia Broadcasting Using Adaptive Error Recovery

    Directory of Open Access Journals (Sweden)

    Carlos M. Lentisco

    2017-01-01

    Full Text Available Multimedia services over mobile networks present several challenges, such as ensuring a reliable delivery of multimedia content, avoiding undesired service disruptions, and reducing service latency. HTTP adaptive streaming addresses these problems for multimedia unicast services, but it is not efficient from the point of view of radio resource consumption. In Long-Term Evolution (LTE) networks, multimedia broadcast services are provided over a common radio channel using a combination of forward error correction and unicast error recovery techniques at the application level. This paper discusses how to avoid service disruptions and reduce service latency for LTE multimedia broadcast services by adding dynamic adaptation capabilities to the unicast error recovery process. The proposed solution provides seamless mobile multimedia broadcasting without compromising the quality of service perceived by the users.

  18. The concept of error and malpractice in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca; Pinto, Fabio; Reali, Riccardo; Daniele, Stefania; Romano, Luigia

    2012-08-01

    Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. The etiology of radiological error is multifactorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge, and misjudgments. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Errors are an inevitable part of human life, and every health professional has made mistakes. To improve patient safety and reduce the risk from harm, we must accept that some errors are inevitable during the delivery of health care. We must play a cultural change in medicine, wherein errors are actively sought, openly discussed, and aggressively addressed. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Systematic Uncertainties in Black Hole Masses Determined from Single Epoch Spectra

    DEFF Research Database (Denmark)

    Denney, Kelly D.; Peterson, Bradley M.; Dietrich, Matthias

    2008-01-01

    We explore the nature of systematic errors that can arise in measurement of black hole masses from single-epoch spectra of active galactic nuclei (AGNs) by utilizing the many epochs available for NGC 5548 and PG1229+204 from reverberation mapping databases. In particular, we examine systematics due...

  20. Effective strategies to reduce commercial tobacco use in Indigenous communities globally: A systematic review.

    Science.gov (United States)

    Minichiello, Alexa; Lefkowitz, Ayla R F; Firestone, Michelle; Smylie, Janet K; Schwartz, Robert

    2016-01-11

    All over the world, Indigenous populations have remarkably high rates of commercial tobacco use compared to non-Indigenous groups. The high rates of commercial tobacco use in Indigenous populations have led to a variety of health issues and lower life expectancy than the general population. The objectives of this systematic review were to investigate changes in the initiation, consumption and quit rates of commercial tobacco use as well as changes in knowledge, prevalence, community interest, and smoke-free environments in Indigenous populations. We also aimed to understand which interventions had broad reach, what the common elements that supported positive change were and how Aboriginal self-determination was reflected in program implementation. We undertook a systematic review of peer-reviewed publications and grey literature selected from seven databases and 43 electronic sources. We included studies between 1994 and 2015 if they addressed an intervention (including provision of a health service or program, or education or training programs) aimed at reducing commercial tobacco use in Indigenous communities globally. Systematic cross-regional canvassing of informants in Canada and internationally with knowledge of Indigenous health and/or tobacco control provided further leads about commercial tobacco reduction interventions. We extracted data on program characteristics, study design and learnings including successes and challenges. In the process of this review, we investigated 73 commercial tobacco control interventions in Indigenous communities globally. These interventions incorporated a myriad of activities to reduce, cease or protect Indigenous peoples from the harms of commercial tobacco use. Interventions were successful in producing positive changes in initiation, consumption and quit rates. Interventions also facilitated increases in the number of smoke-free environments, greater understandings of the harms of commercial tobacco use and a