WorldWideScience

Sample records for residual systematic error

  1. Statistical tests against systematic errors in data sets based on the equality of residual means and variances from control samples: theory and applications.

    Science.gov (United States)

    Henn, Julian; Meindl, Kathrin

    2015-03-01

    Statistical tests are applied for the detection of systematic errors in data sets from least-squares refinements or other residual-based reconstruction processes. Samples of the residuals of the data are tested against the hypothesis that they belong to the same distribution. For this it is necessary that they show the same mean values and variances within the limits given by statistical fluctuations. When the samples differ significantly from each other, they are not from the same distribution within the limits set by the significance level. Therefore they cannot originate from a single Gaussian function in this case. It is shown that a significance cutoff results in exactly this case. Significance cutoffs are still frequently used in charge-density studies. The tests are applied to artificial data with and without systematic errors and to experimental data from the literature.
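The test described in this record can be sketched in a few lines: given two samples of residuals, compare their means and variances against statistical fluctuation limits. This is an illustrative sketch, not the authors' implementation; the sample sizes and the injected 0.3 offset are assumed values.

```python
# Illustrative sketch (not the authors' code): test whether two residual
# samples share the same mean and variance within statistical fluctuations.
import math
import random

random.seed(1)
a = [random.gauss(0.0, 1.0) for _ in range(2000)]  # well-behaved residuals
b = [random.gauss(0.3, 1.0) for _ in range(2000)]  # residuals with an injected systematic offset

def mean_var(x):
    m = sum(x) / len(x)
    v = sum((xi - m) ** 2 for xi in x) / (len(x) - 1)
    return m, v

ma, va = mean_var(a)
mb, vb = mean_var(b)

# Large-sample z statistic for equal means.
z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
# Variance ratio; values far from 1 indicate unequal variances.
f = va / vb

# At the 5% significance level the two samples cannot come from the
# same distribution if either criterion fails.
same_distribution = abs(z) < 1.96 and 0.9 < f < 1.1
print(same_distribution)
```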

  2. IGS Rapid Orbits: Systematic Error at Day Boundaries

    National Research Council Canada - National Science Library

    Slabinski, Victor J

    2006-01-01

    ... +2 to +13 cm. IGS Final orbits show similar discontinuities at each 00 hr GPS. The biased residual discontinuities reflect a discontinuity in Rapid orbit systematic position error across day boundaries...

  3. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
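The two recipes compared in this record can be shown in a toy sketch (an illustration, not Roe's code) for an observable that is linear in the systematic parameters; the sensitivities and sigmas below are assumed values, and both estimates should converge to the exact linear-model variance.

```python
# Toy sketch of the unisim and multisim recipes (assumed linear model,
# not Roe's code) for estimating the total systematic variance.
import random

random.seed(7)
c = [0.5, -1.0, 0.8, 0.3, -0.6]    # assumed sensitivities of the observable
sigma = [1.0, 0.5, 1.5, 1.0, 0.8]  # 1-sigma uncertainty of each parameter
K = len(c)

def observable(p):
    return sum(ci * pi for ci, pi in zip(c, p))

nominal = observable([0.0] * K)

# Unisim: one MC run per parameter, that parameter shifted by +1 sigma;
# sum the squared shifts of the observable.
unisim_var = sum(
    (observable([sigma[i] if j == i else 0.0 for j in range(K)]) - nominal) ** 2
    for i in range(K)
)

# Multisim: many runs, all parameters drawn at once from their distributions;
# take the sample variance of the observable.
draws = [observable([random.gauss(0.0, s) for s in sigma]) for _ in range(20000)]
m = sum(draws) / len(draws)
multisim_var = sum((d - m) ** 2 for d in draws) / (len(draws) - 1)

# For a linear model both converge to sum(c_i^2 * sigma_i^2).
exact = sum(ci * ci * si * si for ci, si in zip(c, sigma))
```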

  4. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  5. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  6. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  7. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

... is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance...

  8. Measuring Systematic Error with Curve Fits

    Science.gov (United States)

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  9. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without common errors. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward.
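The quadrature rule and the repetition argument described in this record can be illustrated directly. This is a hedged sketch with assumed error magnitudes, not the paper's formalism: a common systematic error puts a floor under the uncertainty of a mean, however many repeats are taken.

```python
# Illustrative sketch (assumed magnitudes): statistical and systematic
# parts add in quadrature; only the statistical part shrinks with repeats.
import math

stat = 0.10  # uncorrelated (statistical) error of a single measurement
syst = 0.05  # common (systematic) error shared by all measurements

def error_of_mean(n):
    # averaging n repeated measurements reduces the statistical term
    # by 1/sqrt(n), while the common systematic term is untouched
    return math.sqrt(stat ** 2 / n + syst ** 2)

print(round(error_of_mean(1), 3))    # single measurement
print(round(error_of_mean(100), 3))  # floor set by the systematic part
```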

  10. Systematic Review of Errors in Inhaler Use

    DEFF Research Database (Denmark)

    Sanchis, Joaquin; Gich, Ignasi; Pedersen, Søren

    2016-01-01

A systematic search for articles reporting direct observation of inhaler technique by trained personnel covered the period from 1975 to 2014. Outcomes were the nature and frequencies of the three most common errors; the percentage of patients demonstrating correct, acceptable, or poor technique; and variations in these outcomes over these 40 years and when partitioned into years 1 to 20 and years 21 to 40. Analyses were conducted in accordance with recommendations from Preferred Reporting Items for Systematic Reviews and Meta-Analyses and Strengthening the Reporting of Observational Studies in Epidemiology. Results: Data were extracted from 144 articles reporting on a total number of 54,354 subjects performing 59,584 observed tests of technique. The most frequent MDI errors were in coordination (45%; 95% CI, 41%-49%), speed and/or depth of inspiration (44%; 40%-47%), and no postinhalation breath-hold (46%; 42...

  11. Systematic errors in long baseline oscillation experiments

    Energy Technology Data Exchange (ETDEWEB)

    Harris, Deborah A.; /Fermilab

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  12. Investigation of systematic errors of metastable "atomic pair" number

    CERN Document Server

    Yazkov, V

    2015-01-01

Sources of systematic errors in the analysis of data collected in 2012 are analysed. Estimations of systematic errors in the number of "atomic pairs" from metastable π+π− atoms are presented.

  13. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

Systematic error growth rate peak is observed at wavenumber 2 up to 4-day forecast, then ... the influence of summer systematic error and ran- ... total exchange. When the error energy budgets are examined in the spectral domain, one may ask questions on the error growth at a certain wavenumber from its interaction with ...

  14. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    International Nuclear Information System (INIS)

    Chan, Mark; Grehn, Melanie; Cremers, Florian; Siebert, Frank-Andre; Wurster, Stefan; Huttenlocher, Stefan; Dunst, Jürgen; Hildebrandt, Guido; Schweikard, Achim; Rades, Dirk; Ernst, Floris

    2017-01-01

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  15. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Mark [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Tuen Mun Hospital, Hong Kong (China); Grehn, Melanie [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Cremers, Florian [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Siebert, Frank-Andre [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Wurster, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Department for Radiation Oncology, University Medicine Greifswald, Greifswald (Germany); Huttenlocher, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Dunst, Jürgen [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Department for Radiation Oncology, University Clinic Copenhagen, Copenhagen (Denmark); Hildebrandt, Guido [Department for Radiation Oncology, University Medicine Rostock, Rostock (Germany); Schweikard, Achim [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Rades, Dirk [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Ernst, Floris [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); and others

    2017-03-15

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  16. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases.

    Science.gov (United States)

    Chan, Mark; Grehn, Melanie; Cremers, Florian; Siebert, Frank-Andre; Wurster, Stefan; Huttenlocher, Stefan; Dunst, Jürgen; Hildebrandt, Guido; Schweikard, Achim; Rades, Dirk; Ernst, Floris; Blanck, Oliver

    2017-03-15

    Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase-related residual tracking errors. In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, -7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, -1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the share of systematic errors and random errors in the total error of exemplary probes is determined. In the case of simple kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of the more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation in this case would not yield any significant benefits.

  18. A systematic review of medication administration errors with transdermal patches.

    Science.gov (United States)

    Lampert, Anette; Seiberth, Jasmin; Haefeli, Walter E; Seidling, Hanna M

    2014-08-01

    Transdermal patches provide an attractive route of drug delivery with considerable advantages over other routes of administration, for example maintenance of constant plasma drug levels and convenient usage. However, medication administration errors abound with this dosage form and frequently result in harm or treatment failure. A systematic literature search was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines using appropriate keywords to identify articles reporting faulty transdermal patch administration. Common pitfalls and errors that were identified through the systematic literature search were discussed alongside individual steps of the transdermal patch administration process. The systematic investigation of published errors illustrated that every step in the transdermal patch administration process is prone to errors. Thereby, the lack of knowledge and awareness of the importance of a correct administration practice were a major source of risk. Based on the identified errors and causes of errors prevention strategies were developed as a first step in avoiding transdermal patch administration errors.

  19. SHERPA: A systematic human error reduction and prediction approach

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1986-01-01

This paper describes a Systematic Human Error Reduction and Prediction Approach (SHERPA) which is intended to provide guidelines for human error reduction and quantification in a wide range of human-machine systems. The approach utilizes as its basis current cognitive models of human performance. The first module in SHERPA performs task and human error analyses, which identify likely error modes, together with guidelines for the reduction of these errors by training, procedures and equipment redesign. The second module uses a SARAH approach to quantify the probability of occurrence of the errors identified earlier, and provides cost-benefit analyses to assist in choosing the appropriate error reduction approaches in the third module.

  20. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence...

  1. Systematic Errors in Dimensional X-ray Computed Tomography

    DEFF Research Database (Denmark)

In dimensional X-ray computed tomography (CT), many physical quantities influence the final result, and it is important to know which factors in CT measurements potentially lead to systematic errors so that it is possible to compensate for them. In this talk, typical error sources in dimensional X-ray CT are discussed...

  2. RHIC susceptibility to variations in systematic magnetic harmonic errors

    International Nuclear Information System (INIS)

    Dell, G.F.; Peggs, S.; Pilat, F.; Satogata, T.; Tepikian, S.; Trbojevic, D.; Wei, J.

    1994-01-01

Results of a study to determine the sensitivity of tune to uncertainties in the systematic magnetic harmonic errors in the 8 cm dipoles of RHIC are reported. Tolerances specified to the manufacturer for tooling and fabrication can result in systematic harmonics different from the expected values. Limits on the range of systematic harmonics have been established from magnet calculations, and the impact on tune from such harmonics has been evaluated.

  3. The Articulatory Phonetics of /r/ for Residual Speech Errors.

    Science.gov (United States)

    Boyce, Suzanne E

    2015-11-01

Effective treatment for children with residual speech errors (RSEs) requires in-depth knowledge of articulatory phonetics, but this level of detail may not be provided as part of typical clinical coursework. At a time when new imaging technologies such as ultrasound continue to inform our clinical understanding of speech disorders, incorporating contemporary work in the basic articulatory sciences into clinical training becomes especially important. This is particularly the case for the speech sound most likely to persist among children with RSEs: the North American English rhotic sound, /r/. The goal of this article is to review important information about articulatory phonetics as it affects children with RSEs who present with /r/ production difficulties. The data presented are largely drawn from ultrasound and magnetic resonance imaging studies. This information is placed in a clinical context by comparing productions of typical adult speakers with successful versus misarticulated productions of two children with persistent /r/ difficulties.

  4. Residual rotational set-up errors after daily cone-beam CT image guided radiotherapy of locally advanced cervical cancer

    International Nuclear Information System (INIS)

    Laursen, Louise Vagner; Elstrøm, Ulrik Vindelev; Vestergaard, Anne; Muren, Ludvig P.; Petersen, Jørgen Baltzer; Lindegaard, Jacob Christian; Grau, Cai; Tanderup, Kari

    2012-01-01

Purpose: Due to the often quite extended treatment fields in cervical cancer radiotherapy, uncorrected rotational set-up errors result in a potential risk of target miss. This study reports on the residual rotational set-up error after using daily cone beam computed tomography (CBCT) to position cervical cancer patients for radiotherapy treatment. Methods and materials: Twenty-five patients with locally advanced cervical cancer had daily CBCT scans (650 CBCTs in total) prior to treatment delivery. We retrospectively analyzed the translational shifts made in the clinic prior to each treatment fraction as well as the residual rotational errors remaining after translational correction. Results: The CBCT-guided couch movement resulted in a mean translational 3D vector correction of 7.4 mm. Residual rotational error resulted in a target shift exceeding 5 mm in 57 of the 650 treatment fractions. Three patients alone accounted for 30 of these fractions. Nine patients had no shifts exceeding 5 mm and 13 patients had 5 or fewer treatment fractions with such shifts. Conclusion: Twenty-two of the 25 patients had no or only a few treatment fractions with target shifts larger than 5 mm due to residual rotational error. However, three patients displayed a significant number of shifts, suggesting a more systematic set-up error.
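The geometric link between an uncorrected rotation and a target shift, as studied in this record, can be sketched as a chord-length calculation. The 100 mm lever arm and 3° rotation below are assumed illustrative values, not data from the study.

```python
# Geometric sketch (assumed values, not data from the study): the lateral
# shift of a target caused by an uncorrected rotation about a distant point.
import math

def shift_mm(lever_arm_mm, angle_deg):
    # chord length swept by a point at the given distance from the
    # rotation centre
    return 2.0 * lever_arm_mm * math.sin(math.radians(angle_deg) / 2.0)

# A residual 3 degree rotation about a point 100 mm from the target
# already moves the target by more than a 5 mm action level.
print(shift_mm(100.0, 3.0) > 5.0)
```

The sketch makes plain why extended treatment fields magnify rotational errors: the shift scales linearly with the distance from the rotation point.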

  5. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations...
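The kind of systematic odometry error described in this record can be illustrated with differential-drive kinematics; the wheel-base values below are assumed for illustration, not taken from the paper.

```python
# Minimal sketch (assumed wheel-base values, not from the paper):
# a mis-calibrated wheel base turns dead-reckoned heading into a
# systematic, repeatable drift rather than random noise.
import math

true_base = 0.50     # actual distance between the wheels, in metres
assumed_base = 0.52  # value used by the odometry model (4% error)

def heading_change(arc_left, arc_right, base):
    # differential-drive kinematics: d_theta = (dr - dl) / b
    return (arc_right - arc_left) / base

# Spin the robot once about its centre: the wheel-arc difference for a
# full 360 degree turn is 2*pi*true_base.
turn = 2.0 * math.pi * true_base
true_heading = heading_change(0.0, turn, true_base)  # 2*pi radians
est_heading = heading_change(0.0, turn, assumed_base)

error_deg = math.degrees(true_heading - est_heading)
print(round(error_deg, 1))  # heading error accumulated per revolution
```

Because the error repeats identically every revolution, it cannot be averaged away, which is why the paper pursues auto-calibration of the wheel base rather than filtering alone.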

  6. MUB tomography performance under influence of systematic errors

    Science.gov (United States)

    Sainz, Isabel; García, Andrés; Klimov, Andrei B.

    2018-01-01

We propose a method for accounting for the simplest type of systematic errors in mutually unbiased bases (MUB) tomography, emerging due to an imperfect (non-orthogonal) preparation of the measurement bases. The present approach makes it possible to analyze analytically the performance of MUB tomography in finite systems of arbitrary (prime) dimension. We compare the estimation error appearing in such imperfect MUB-based tomography with that intrinsically present in the framework of the symmetric informationally complete positive operator-valued measure (SIC-POVM) reconstruction scheme and find that MUB tomography outperforms perfect SIC-POVM tomography, including in the case of strong errors.

  7. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

    Systematic and random error and their growth rate and different components of growth rate budget in energy/variance form are investigated at wavenumber domain for medium range tropical (30°S-30°N) weather forecast using daily horizontal wind field of 850 hPa up to 5-day forecast for the month of June, 2000 of NCEP ...

  8. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

Deterministic predictability in the perspective of systematic and random error and their growth rates and different components of growth rate budgets like flux, pure generation, mixed generation and conversion in energy/variance form are investigated in physical domain for medium range tropical (30°S–30°N) weather ...

  9. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    Arima, Tatsumi

    1993-01-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived from the dominant sources of systematic uncertainty but does not include uncertainties that are still under study. In the end-cap region, a study of shower behavior and of the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter; we expect the systematic error in this region to be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, as determined by comparing the Tobimatsu-Shimizu program with BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in the leading-log approximation. (J.P.N.)

  10. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived from the dominant sources of systematic uncertainty but does not include uncertainties that are still under study. In the end-cap region, a study of shower behavior and of the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter; we expect the systematic error in this region to be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, as determined by comparing the Tobimatsu-Shimizu program with BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in the leading-log approximation. (J.P.N.)

  11. Medication Errors in the Southeast Asian Countries: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Shahrzad Salmasi

    Full Text Available Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and tackled.

  12. Medication Errors in the Southeast Asian Countries: A Systematic Review.

    Science.gov (United States)

    Salmasi, Shahrzad; Khan, Tahir Mehmood; Hong, Yet Hoi; Ming, Long Chiau; Wong, Tin Wui

    2015-01-01

    Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and tackled.

  13. On the effect of systematic errors in near real time accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1987-01-01

    Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e. for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper sequential test procedures are considered which are based on the so-called MUF residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity if this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length under constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation, where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered so that the efficiency of NRTA evaluation methods can be estimated realistically
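A sequential test on MUF residuals of the kind discussed here can be sketched with a one-sided CUSUM (Page's test). The reference value `k` and threshold `h` below are illustrative choices, not parameters analyzed by the paper:

```python
def page_test_run_length(residuals, k=0.5, h=5.0):
    """One-sided CUSUM (Page's test) on standardized MUF residuals.

    Returns the inventory period at which an alarm is raised, or None.
    The run length under loss vs. no loss is the quantity whose average
    the paper studies."""
    s = 0.0
    for t, r in enumerate(residuals, start=1):
        s = max(0.0, s + r - k)   # accumulate evidence above the drift k
        if s > h:
            return t              # alarm: cumulative excess crossed h
    return None

# Deterministic illustration: a constant diversion of one standardized
# unit per period is detected quickly; zero residuals never trip the test.
t_detect = page_test_run_length([1.0] * 100)   # alarm at period 11
t_none = page_test_run_length([0.0] * 100)     # no alarm
```

With a persistent systematic error inflating the residuals, the same statistic would alarm even without diversion, which is the concern the abstract raises.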

  14. Weak gravitational lensing systematic errors in the dark energy survey

    Science.gov (United States)

    Plazas, Andres Alejandro

    Dark energy is one of the most important unsolved problems in modern physics, and weak gravitational lensing (WL) by mass structures along the line of sight ("cosmic shear") is a promising technique to learn more about its nature. However, WL is subject to numerous systematic errors which induce biases in the measured cosmological parameters and prevent the development of its full potential. In this thesis, we advance the understanding of WL systematics in the context of the Dark Energy Survey (DES). We develop a testing suite to assess the performance of the shapelet-based DES WL measurement pipeline. We determine that the measurement bias of the parameters of our Point Spread Function (PSF) model scales as (S/N)^(-2), implying that a PSF S/N > 75 is needed to satisfy DES requirements. PSF anisotropy suppression also satisfies the requirements for source galaxies with S/N ≳ 45. For low-noise, marginally resolved exponential galaxies, the shear calibration errors are up to about 0.06% (for shear values ≲ 0.075). Galaxies with S/N ≳ 75 present about 1% errors, sufficient for first-year DES data. However, more work is needed to satisfy full-area DES requirements, especially in the high-noise regime. We then implement tests to validate the high accuracy of the map between pixel coordinates and sky coordinates (the astrometric solution), which is crucial to detect the required number of galaxies for WL in stacked images. We also study the effect of atmospheric dispersion on cosmic-shear experiments such as DES and the Large Synoptic Survey Telescope (LSST) in the four griz bands. For DES (LSST), we find systematics in the g and r (g, r, and i) bands that are larger than required. We find that a simple linear correction in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r (i) band for DES (LSST). More complex corrections will likely reduce the systematic cosmic-shear errors below statistical errors for the LSST r band

  15. RESIDUAL LIMB VOLUME CHANGE: SYSTEMATIC REVIEW OF MEASUREMENT AND MANAGEMENT

    Science.gov (United States)

    Sanders, JE; Fatone, S

    2014-01-01

    Management of residual limb volume affects decisions regarding timing of fit of the first prosthesis, when a new prosthetic socket is needed, design of a prosthetic socket, and prescription of accommodation strategies for daily volume fluctuations. The purpose of this systematic review was to assess what is known about measurement and management of residual limb volume change in persons with lower-limb amputation. Publications that met inclusion criteria were grouped into three categories: (I) descriptions of residual limb volume measurement techniques; (II) studies on people with lower-limb amputation investigating the effect of residual limb volume change on clinical care; and (III) studies of residual limb volume management techniques or descriptions of techniques for accommodating or controlling residual limb volume. The review showed that many techniques for the measurement of residual limb volume have been described but clinical use is limited largely because current techniques lack adequate resolution and in-socket measurement capability. Overall, there is limited evidence regarding the management of residual limb volume, and the evidence available focuses primarily on adults with trans-tibial amputation in the early post-operative phase. While we can draw some insights from the available research about residual limb volume measurement and management, further research is required. PMID:22068373

  16. Differential effects of visual-acoustic biofeedback intervention for residual speech errors

    Directory of Open Access Journals (Sweden)

    Tara Mcallister Byun

    2016-11-01

    Full Text Available Recent evidence suggests that the incorporation of visual biofeedback technologies may enhance response to treatment in individuals with residual speech errors. However, there is a need for controlled research systematically comparing biofeedback versus non-biofeedback intervention approaches. This study implemented a single-subject experimental design with a crossover component to investigate the relative efficacy of visual-acoustic biofeedback and traditional articulatory treatment for residual rhotic errors. Eleven child/adolescent participants received ten sessions of visual-acoustic biofeedback and ten sessions of traditional treatment, with the order of the biofeedback and traditional phases counterbalanced across participants. Probe measures eliciting untreated rhotic words were administered in at least 3 sessions prior to the start of treatment (baseline), between the two treatment phases (midpoint), and after treatment ended (maintenance), as well as before and after each treatment session. Perceptual accuracy of rhotic production was assessed by outside listeners in a blinded, randomized fashion. Results were analyzed using a combination of visual inspection of treatment trajectories, individual effect sizes, and logistic mixed-effects regression. Effect sizes and visual inspection revealed that participants could be divided into categories of strong responders (n=4), mixed/moderate responders (n=3), and non-responders (n=4). Individual results did not reveal a reliable pattern of stronger performance in biofeedback versus traditional blocks, or vice versa. Moreover, biofeedback versus traditional treatment was not a significant predictor of accuracy in the logistic mixed-effects model examining all within-treatment word probes. However, the interaction between treatment condition and treatment order was significant: biofeedback was more effective than traditional treatment in the first phase of treatment, and traditional treatment was more ...

  17. LÉVY-BASED ERROR PREDICTION IN CIRCULAR SYSTEMATIC SAMPLING

    Directory of Open Access Journals (Sweden)

    Kristjana Ýr Jónsdóttir

    2013-06-01

    Full Text Available In the present paper, Lévy-based error prediction in circular systematic sampling is developed. A model-based statistical setting as in Hobolth and Jensen (2002) is used, but the assumption that the measurement function is Gaussian is relaxed. The measurement function is represented as a periodic stationary stochastic process X obtained by kernel smoothing of a Lévy basis. The process X may have an arbitrary covariance function. The distribution of the error predictor, based on measurements in n systematic directions, is derived. Statistical inference is developed for the model parameters in the case where the covariance function follows the celebrated p-order covariance model.
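The sampling setting can be illustrated with a plain numerical sketch (the Lévy-basis machinery itself does not fit in a few lines): a harmonic of the measurement function contributes systematic-sampling error only when it aliases onto the n equally spaced directions. The measurement function below is an assumption for illustration:

```python
import numpy as np

def circular_systematic_estimate(f, n, phase=0.0):
    """Estimate the circular mean of f from n systematic (equally spaced)
    directions with a random rotation `phase`."""
    theta = phase + 2 * np.pi * np.arange(n) / n
    return f(theta).mean()

f = lambda t: 2.0 + np.cos(3 * t)   # assumed measurement function, true mean 2

# Harmonic 3 is not a multiple of n=4, so 4 directions estimate the mean exactly;
# with n=3 the harmonic aliases onto the sample mean and produces an error cos(3*phase).
err4 = circular_systematic_estimate(f, n=4, phase=0.3) - 2.0
err3 = circular_systematic_estimate(f, n=3, phase=0.3) - 2.0
```

The paper's contribution is the distribution of such errors when X is a kernel-smoothed Lévy basis rather than a fixed harmonic or a Gaussian process.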

  18. A method to evaluate residual phase error for polar formatted synthetic aperture radar systems

    Science.gov (United States)

    Musgrove, Cameron; Naething, Richard

    2013-05-01

    Synthetic aperture radar systems that use the polar format algorithm are subject to a focused scene size limit inherent to the polar format algorithm. The classic focused scene size limit is determined from the dominant residual range phase error term. Given the many sources of phase error in a synthetic aperture radar, a system designer is interested in how much phase error results from the assumptions made with the polar format algorithm. Autofocus algorithms have limits to the amount and type of phase error that can be corrected. Current methods correct only one or a few terms of the residual phase error. A system designer needs to be able to evaluate the contribution of the residual or uncorrected phase error terms to determine the new focused scene size limit. This paper describes a method to estimate the complete residual phase error, not just one or a few of the dominant residual terms. This method is demonstrated with polar format image formation, but is equally applicable to other image formation algorithms. A benefit for the system designer is that additional correction terms can be added or deleted from the analysis as necessary to evaluate the resulting effect upon image quality.

  19. Probabilistic modeling of systematic errors in two-hybrid experiments.

    Science.gov (United States)

    Sontag, David; Singh, Rohit; Berger, Bonnie

    2007-01-01

    We describe a novel probabilistic approach to estimating errors in two-hybrid (2H) experiments. Such experiments are frequently used to elucidate protein-protein interaction networks in a high-throughput fashion; however, a significant challenge with these is their relatively high error rate, specifically, a high false-positive rate. We describe a comprehensive error model for 2H data, accounting for both random and systematic errors. The latter arise from limitations of the 2H experimental protocol: in theory, the reporting mechanism of a 2H experiment should be activated if and only if the two proteins being tested truly interact; in practice, even in the absence of a true interaction, it may be activated by some proteins - either by themselves or through promiscuous interaction with other proteins. We describe a probabilistic relational model that explicitly models the above phenomenon and use Markov Chain Monte Carlo (MCMC) algorithms to compute both the probability of an observed 2H interaction being true as well as the probability of individual proteins being self-activating/promiscuous. This is the first approach that explicitly models systematic errors in protein-protein interaction data; in contrast, previous work on this topic has modeled errors as being independent and random. By explicitly modeling the sources of noise in 2H systems, we find that we are better able to make use of the available experimental data. In comparison with Bader et al.'s method for estimating confidence in 2H predicted interactions, the proposed method performed 5-10% better overall, and in particular regimes improved prediction accuracy by as much as 76%. http://theory.csail.mit.edu/probmod2H
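The core intuition, that a positive 2H readout from a promiscuous or self-activating protein carries less evidence, can be sketched with a single-observation Bayes update. The rates below are invented for illustration and stand in for quantities the paper infers jointly with MCMC:

```python
def p_true_interaction(obs_positive, p_prior, fpr, fnr):
    """Posterior probability that an interaction is real given one 2H
    observation, for assumed false-positive (fpr) and false-negative (fnr)
    rates. A toy, single-pair version of the paper's relational model."""
    if obs_positive:
        num = (1 - fnr) * p_prior              # true interaction, detected
        den = num + fpr * (1 - p_prior)        # plus spurious activation
    else:
        num = fnr * p_prior                    # true interaction, missed
        den = num + (1 - fpr) * (1 - p_prior)
    return num / den

# A sticky (promiscuous) bait has a much higher false-positive rate, so the
# same positive observation yields a much weaker posterior:
p_normal = p_true_interaction(True, 0.01, fpr=0.05, fnr=0.1)
p_sticky = p_true_interaction(True, 0.01, fpr=0.50, fnr=0.1)
```

In the paper the per-protein promiscuity itself is a latent variable, so the effective `fpr` differs across proteins rather than being a single global constant as here.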

  20. Interventions to reduce pediatric medication errors: a systematic review.

    Science.gov (United States)

    Rinke, Michael L; Bundy, David G; Velasquez, Christina A; Rao, Sandesh; Zerhouni, Yasmin; Lobner, Katie; Blanck, Jaime F; Miller, Marlene R

    2014-08-01

    Medication errors cause appreciable morbidity and mortality in children. The objective was to determine the effectiveness of interventions to reduce pediatric medication errors, identify gaps in the literature, and perform meta-analyses on comparable studies. Relevant studies were identified from searches of PubMed, Embase, Scopus, Web of Science, the Cochrane Library, and the Cumulative Index to Nursing Allied Health Literature and previous systematic reviews. Inclusion criteria were peer-reviewed original data in any language testing an intervention to reduce medication errors in children. Abstract and full-text article review were conducted by 2 independent authors with sequential data extraction. A total of 274 full-text articles were reviewed and 63 were included. Only 1% of studies were conducted at community hospitals, 11% were conducted in ambulatory populations, 10% reported preventable adverse drug events, 10% examined administering errors, 3% examined dispensing errors, and none reported cost-effectiveness data, suggesting persistent research gaps. Variation existed in the methods, definitions, outcomes, and rate denominators for all studies; and many showed an appreciable risk of bias. Although 26 studies (41%) involved computerized provider order entry, a meta-analysis was not performed because of methodologic heterogeneity. Studies of computerized provider order entry with clinical decision support compared with studies without clinical decision support reported a 36% to 87% reduction in prescribing errors; studies of preprinted order sheets revealed a 27% to 82% reduction in prescribing errors. Pediatric medication errors can be reduced, although our understanding of optimal interventions remains hampered. Research should focus on understudied areas, use standardized definitions and outcomes, and evaluate cost-effectiveness. Copyright © 2014 by the American Academy of Pediatrics.

  1. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  2. A Refined Algorithm On The Estimation Of Residual Motion Errors In Airborne SAR Images

    Science.gov (United States)

    Zhong, Xuelian; Xiang, Maosheng; Yue, Huanyin; Guo, Huadong

    2010-10-01

    Due to limited accuracy in the navigation system, residual motion errors (RMEs) frequently appear in airborne SAR images. For very-high-resolution SAR imaging and repeat-pass SAR interferometry, the residual motion errors must be estimated and compensated. We previously proposed an algorithm to estimate the residual motion errors of an individual SAR image. It exploits point-like targets distributed along the azimuth direction, and not only corrects the phase but also improves the azimuth focusing. However, the required point targets are selected by hand, which is time- and labor-consuming; in addition, the algorithm is sensitive to noise. In this paper, a refined algorithm is proposed to address these two shortcomings. Using real X-band airborne SAR data, the feasibility and accuracy of the refined algorithm are demonstrated.

  3. A study of systematic errors in the PMD CamBoard nano

    Science.gov (United States)

    Chow, Jacky C. K.; Lichti, Derek D.

    2013-04-01

    Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera, it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used; this makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations was reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.
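The headline finding, an amplitude-dependent range error removable by self-calibration, can be mimicked in a few lines: fit the range residuals against amplitude and subtract the fit. The bias model and noise level are assumptions for illustration, not the nano's actual error curve, and a plain polynomial fit stands in for the paper's bundle adjustment:

```python
import numpy as np

rng = np.random.default_rng(2)
amplitude = rng.uniform(0.1, 1.0, 500)        # normalized return amplitude
true_bias = 0.05 - 0.04 * amplitude           # assumed amplitude-dependent bias (m)
range_residual = true_bias + rng.normal(0, 0.005, 500)   # plus random noise

# Self-calibration step: model the systematic part as linear in amplitude.
coef = np.polyfit(amplitude, range_residual, deg=1)
corrected = range_residual - np.polyval(coef, amplitude)

rmse_before = np.sqrt(np.mean(range_residual**2))
rmse_after = np.sqrt(np.mean(corrected**2))   # only the random noise remains
```

After removing the modeled systematic term, the RMSE drops toward the random-noise floor, qualitatively matching the ~50% reduction the paper reports.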

  4. Fibonacci collocation method with a residual error function to solve linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Salih Yalcinbas

    2016-01-01

    Full Text Available In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve high-order linear Volterra integro-differential equations under the given conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method, and comparisons are made with existing results. In addition, an error estimation based on the residual functions is presented for this method, and the approximate solutions are improved by using this error estimation.
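The basis used by such a method can be generated directly from the Fibonacci-polynomial recurrence F1(x) = 1, F2(x) = x, Fk(x) = x·F(k-1)(x) + F(k-2)(x). This sketch builds the basis only, not the full collocation solver or the residual-based error estimator:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def fibonacci_polys(n):
    """First n Fibonacci polynomials as coefficient arrays in increasing
    degree (F1 = 1, F2 = x, Fk = x*F_{k-1} + F_{k-2})."""
    polys = [np.array([1.0]), np.array([0.0, 1.0])]
    for _ in range(2, n):
        # multiply the previous polynomial by x, then add the one before it
        polys.append(P.polyadd(P.polymulx(polys[-1]), polys[-2]))
    return polys[:n]

F = fibonacci_polys(5)
# F3 = 1 + x^2, F4 = 2x + x^3, F5 = 1 + 3x^2 + x^4
```

In a collocation scheme the unknown solution is expanded in this basis, the expansion is substituted into the integro-differential equation at collocation points, and the leftover residual function drives the error estimate the abstract mentions.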

  5. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S. [et al.

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
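The chromatic effect described here can be sketched with toy synthetic photometry: pass a blue and a red source spectrum through a nominal and a slightly shifted bandpass and compare the magnitude offsets. The bandpass shapes, spectra and shift are all invented for illustration and are not the DES throughput curves:

```python
import numpy as np

def synthetic_mag(flux, throughput, wavelength):
    """Photon-counting synthetic magnitude on a uniform grid (arbitrary zero point)."""
    num = np.sum(flux * throughput * wavelength)
    ref = np.sum(throughput * wavelength)
    return -2.5 * np.log10(num / ref)

wl = np.linspace(550.0, 700.0, 500)                     # nm, toy r-like band
t_natural = np.exp(-0.5 * ((wl - 625.0) / 40.0) ** 2)   # nominal throughput
t_shifted = np.exp(-0.5 * ((wl - 630.0) / 40.0) ** 2)   # e.g. atmospheric drift

blue_star = (wl / 625.0) ** -4    # spectrum falling with wavelength
red_star = (wl / 625.0) ** 4      # spectrum rising with wavelength

dm_blue = synthetic_mag(blue_star, t_shifted, wl) - synthetic_mag(blue_star, t_natural, wl)
dm_red = synthetic_mag(red_star, t_shifted, wl) - synthetic_mag(red_star, t_natural, wl)
# The zeropoint shift differs between the two colors: a chromatic systematic,
# which is why a correction linear in source color can absorb most of it.
```

A redward throughput shift makes the falling (blue) spectrum fainter and the rising (red) spectrum brighter relative to a color-independent calibration, so `dm_blue` and `dm_red` take opposite signs.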

  6. Medication administration errors and the pediatric population: a systematic search of the literature.

    Science.gov (United States)

    Gonzales, Kelly

    2010-12-01

    There are a variety of factors that make the pediatric population more susceptible to medication errors and potential complications resulting from medication administration including the availability of different dosage forms of the same medication, incorrect dosing, lack of standardized dosing regimen, and organ system maturity. A systematic literature search on medication administration errors in the pediatric population was conducted. Five themes obtained from the systematic literature search include incidence rate of medication administration errors; specific medications involved in medication administration errors and classification of the errors; why medication administration errors occur; medication error reporting; and interventions to reduce medication errors. Copyright © 2010 Elsevier Inc. All rights reserved.

  7. Systematic literature review of hospital medication administration errors in children.

    Science.gov (United States)

    Ameer, Ahmed; Dhillon, Soraya; Peters, Mark J; Ghaleb, Maisoon

    2015-01-01

    Medication administration is the last step in the medication process. It can act as a safety net to prevent unintended harm to patients if detected. However, medication administration errors (MAEs) during this process have been documented and thought to be preventable. In pediatric medicine, doses are usually administered based on the child's weight or body surface area. This in turn increases the risk of drug miscalculations and therefore MAEs. The aim of this review is to report MAEs occurring in pediatric inpatients. Twelve bibliographic databases were searched for studies published between January 2000 and February 2015 using "medication administration errors", "hospital", and "children" related terminologies. Handsearching of relevant publications was also carried out. A second reviewer screened articles for eligibility and quality in accordance with the inclusion/exclusion criteria. A total of 44 studies were systematically reviewed. MAEs were generally defined as a deviation of dose given from that prescribed; this included omitted doses and administration at the wrong time. Hospital MAEs in children accounted for a mean of 50% of all reported medication error reports (n=12,588). It was also identified in a mean of 29% of doses observed (n=8,894). The most prevalent type of MAEs related to preparation, infusion rate, dose, and time. This review has identified five types of interventions to reduce hospital MAEs in children: barcode medicine administration, electronic prescribing, education, use of smart pumps, and standard concentration. This review has identified a wide variation in the prevalence of hospital MAEs in children. This is attributed to the definition and method used to investigate MAEs. The review also illustrated the complexity and multifaceted nature of MAEs. Therefore, there is a need to develop a set of safety measures to tackle these errors in pediatric practice.

  8. Minimizing Actuator-Induced Residual Error in Active Space Telescope Primary Mirrors

    Science.gov (United States)

    2010-09-01

    The work describes a modeling process using Matlab and MSC Nastran to simulate actuator-induced residual error. A finite element mirror model automatically generates the structural design of a space telescope via Nastran, adds representative dynamic disturbances, and simulates the application of ... polynomials and Bessel functions. The authors employ a piezoelectrically-actuated membrane mirror model implemented using MSC Nastran to calculate the ...

  9. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    Directory of Open Access Journals (Sweden)

    Guo Xiao-Mao

    2010-10-01

    Full Text Available Abstract Background: Cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors compared to skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and the correction action levels needed in CBCT-guided accelerated partial breast irradiation (APBI). Methods: Ten patients were enrolled in the prospective study of CBCT-guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained, one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. Results: A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors after treatment delivery were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction. Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; and 7.6 mm, 7.9 mm, 9.4 mm, 10 ...
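For context, margins like those quoted above are commonly derived from population recipes such as van Herk's 2.5Σ + 0.7σ, where Σ is the systematic and σ the random error. The paper does not state which formula it uses, so this is an assumed illustration applied to the reported LR-direction RAIF errors:

```python
def ptv_margin(sigma_systematic, sigma_random):
    """van Herk population margin recipe (mm): 2.5*Sigma + 0.7*sigma.
    An assumed formula for illustration; the paper's own recipe is not stated."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# Reported LR-direction RAIF errors at the 3 mm action level: Sigma = 2.5 mm,
# sigma = 2.3 mm.
margin_lr = ptv_margin(2.5, 2.3)   # 7.86 mm, close to the reported 7.9 mm
```

This back-of-the-envelope check shows why the LR margin barely changes with the action level: the residual LR errors after correction are nearly constant across the simulated levels.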

  10. Systematic Identification of Machine-Learning Models Aimed to Classify Critical Residues for Protein Function from Protein Structure.

    Science.gov (United States)

    Corral-Corral, Ricardo; Beltrán, Jesús A; Brizuela, Carlos A; Del Rio, Gabriel

    2017-10-09

    Protein structure and protein function should be related, yet the nature of this relationship remains unsolved. Mapping the critical residues for protein function with protein structure features represents an opportunity to explore this relationship, yet two important limitations have precluded a proper analysis of the structure-function relationship of proteins: (i) the lack of a formal definition of what critical residues are and (ii) the lack of a systematic evaluation of methods and protein structure features. To address this problem, here we introduce an index to quantify the protein-function criticality of a residue based on experimental data, and a strategy aimed at optimizing both descriptors of protein structure (physicochemical and centrality descriptors) and machine learning algorithms, to minimize the error in the classification of critical residues. We observed that both physicochemical and centrality descriptors of residues effectively relate protein structure and protein function, and that physicochemical descriptors better describe critical residues. We also show that critical residues are better classified when residue criticality is considered as a binary attribute (i.e., residues are considered critical or not critical). Using this binary annotation for critical residues, eight models rendered accurate and non-overlapping classifications of critical residues, confirming the multi-factorial character of the structure-function relationship of proteins.
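Centrality descriptors, one of the two descriptor families the abstract mentions, are computed on a residue contact network. A minimal sketch using degree centrality on a toy graph (the graph and the choice of descriptor are illustrative assumptions, not the paper's actual descriptor set):

```python
def degree_centrality(adj):
    """Normalized degree centrality of each node of an undirected graph,
    given as an adjacency dict {node: set(neighbors)}."""
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Hypothetical residue-contact graph with four residues A..D
contacts = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}
print(degree_centrality(contacts)["A"])  # 1.0: A contacts every other residue
```

High-centrality residues are hubs of the contact network; whether such hubs coincide with functionally critical residues is exactly the kind of question the paper's classification framework evaluates.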

  11. Systematic literature review of hospital medication administration errors in children

    Directory of Open Access Journals (Sweden)

    Ameer A

    2015-11-01

    Full Text Available Ahmed Ameer,1 Soraya Dhillon,1 Mark J Peters,2 Maisoon Ghaleb1 1Department of Pharmacy, School of Life and Medical Sciences, University of Hertfordshire, Hatfield, UK; 2Paediatric Intensive Care Unit, Great Ormond Street Hospital, London, UK Objective: Medication administration is the last step in the medication process. It can act as a safety net to prevent unintended harm to patients if detected. However, medication administration errors (MAEs) during this process have been documented and are thought to be preventable. In pediatric medicine, doses are usually administered based on the child's weight or body surface area. This in turn increases the risk of drug miscalculations and therefore MAEs. The aim of this review is to report MAEs occurring in pediatric inpatients. Methods: Twelve bibliographic databases were searched for studies published between January 2000 and February 2015 using “medication administration errors”, “hospital”, and “children” related terminologies. Handsearching of relevant publications was also carried out. A second reviewer screened articles for eligibility and quality in accordance with the inclusion/exclusion criteria. Key findings: A total of 44 studies were systematically reviewed. MAEs were generally defined as a deviation of the dose given from that prescribed; this included omitted doses and administration at the wrong time. Hospital MAEs in children accounted for a mean of 50% of all reported medication error reports (n=12,588). MAEs were also identified in a mean of 29% of doses observed (n=8,894). The most prevalent types of MAEs related to preparation, infusion rate, dose, and time. This review has identified five types of interventions to reduce hospital MAEs in children: barcode medicine administration, electronic prescribing, education, use of smart pumps, and standard concentration. Conclusion: This review has identified a wide variation in the prevalence of hospital MAEs in children. This is attributed to

  12. Structural brain differences in school-age children with residual speech sound errors.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Mencl, W Einar; Frost, Stephen J; Hoeft, Fumiko; Fulbright, Robert K; Landi, Nicole; Grigorenko, Elena L; Seki, Ayumi; Felsenfeld, Susan; Pugh, Kenneth R

    2014-01-01

    The purpose of the study was to identify structural brain differences in school-age children with residual speech sound errors. Voxel based morphometry was used to compare gray and white matter volumes for 23 children with speech sound errors, ages 8;6-11;11, and 54 typically speaking children matched on age, oral language, and IQ. We hypothesized that regions associated with production and perception of speech sounds would differ between groups. Results indicated greater gray matter volumes for the speech sound error group relative to typically speaking controls in bilateral superior temporal gyrus. There was greater white matter volume in the corpus callosum for the speech sound error group, but less white matter volume in right lateral occipital gyrus. Results may indicate delays in neuronal pruning in critical speech regions or differences in the development of networks for speech perception and production. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Compensation for straightness measurement systematic errors in six degree-of-freedom motion error simultaneous measurement system.

    Science.gov (United States)

    Cui, Cunxing; Feng, Qibo; Zhang, Bin

    2015-04-10

    The straightness measurement systematic errors induced by error crosstalk, fabrication and installation deviations of optical elements, measurement sensitivity variation, and the Abbe error in a six degree-of-freedom simultaneous measurement system are analyzed in detail in this paper. Models for compensating these systematic errors were established and verified through a series of comparison experiments with the Automated Precision Inc. (API) 5D measurement system, and the experimental results showed that the maximum deviation in straightness error measurement could be reduced from 6.4 to 0.9 μm in the x-direction, and from 8.8 to 0.8 μm in the y-direction, after the compensation.
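The Abbe error listed among the systematic error sources grows to first order as the product of the offset between the measurement axis and the motion axis with the angular motion error. A minimal sketch with hypothetical numbers (the paper's full compensation models are not reproduced here):

```python
import math

def abbe_error_um(offset_mm, angle_arcsec):
    """First-order Abbe error: lateral offset (mm) times the tangent of the
    angular motion error (arcsec), returned in micrometres."""
    angle_rad = angle_arcsec * math.pi / (180 * 3600)  # arcsec -> rad
    return offset_mm * math.tan(angle_rad) * 1000.0    # mm -> um

# Hypothetical case: 50 mm Abbe offset, 10 arcsec pitch error
print(round(abbe_error_um(50, 10), 2))  # ~2.42 um
```

Micrometre-level Abbe contributions of this size are comparable to the residual deviations quoted in the abstract, which is why the offset must either be minimized mechanically or compensated in the model.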

  14. ILRS Activities in Monitoring Systematic Errors in SLR Data

    Science.gov (United States)

    Pavlis, E. C.; Luceri, V.; Kuzmicz-Cieslak, M.; Bianco, G.

    2017-12-01

    The International Laser Ranging Service (ILRS) contributes to ITRF development unique information that only Satellite Laser Ranging (SLR) is sensitive to: the definition of the origin and, in equal parts with VLBI, the scale of the model. For the development of ITRF2014, the ILRS analysts adopted a revision of the internal standards and procedures in generating our contribution from the eight ILRS Analysis Centers. The improved results for the ILRS components were reflected in the resulting new time series of the ITRF origin and scale, showing insignificant trends and tighter scatter. This effort was further extended after the release of ITRF2014, with the execution of a Pilot Project (PP) in the 2016-2017 timeframe that demonstrated the robust estimation of persistent systematic errors at the millimeter level. The ILRS ASC is now turning this into an operational tool to monitor station performance and to generate a history of systematics at each station, to be used with each re-analysis for future ITRF model developments. This is part of a broader ILRS effort to improve the quality control of the data collection process as well as that of our products. To this end, the ILRS has established a "Quality Control Board (QCB)" that comprises members from the analysis and engineering groups, the Central Bureau, and even user groups with special interests. The QCB meets by telecon monthly, oversees the various ongoing projects, and develops ideas for new tools and future products. This presentation will focus on the main topic with an update on the results so far, the schedule for the near future and its operational implementation, along with a brief description of upcoming new ILRS products.

  15. Diffraction grating strain gauge method: error analysis and its application for the residual stress measurement in thermal barrier coatings

    Science.gov (United States)

    Yin, Yuanjie; Fan, Bozhao; He, Wei; Dai, Xianglu; Guo, Baoqiao; Xie, Huimin

    2018-03-01

    Diffraction grating strain gauge (DGSG) is an optical strain measurement method. Based on this method, a six-spot diffraction grating strain gauge (S-DGSG) system has been developed with the advantages of high and adjustable sensitivity, compact structure, and non-contact measurement. In this study, this system is applied for the residual stress measurement in thermal barrier coatings (TBCs) combining the hole-drilling method. During the experiment, the specimen’s location is supposed to be reset accurately before and after the hole-drilling, however, it is found that the rigid body displacements from the resetting process could seriously influence the measurement accuracy. In order to understand and eliminate the effects from the rigid body displacements, such as the three-dimensional (3D) rotations and the out-of-plane displacement of the grating, the measurement error of this system is systematically analyzed, and an optimized method is proposed. Moreover, a numerical experiment and a verified tensile test are conducted, and the results verify the applicability of this optimized method successfully. Finally, combining this optimized method, a residual stress measurement experiment is conducted, and the results show that this method can be applied to measure the residual stress in TBCs.

  16. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
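The ringdown model described, a linear superposition of exponentially damped sinusoids, can be sketched directly; the mode parameters below are illustrative placeholders, not fitted black-hole values:

```python
import math

def ringdown(t, modes):
    """Superposition of exponentially damped sinusoids (quasinormal modes).
    Each mode is (amplitude, frequency_hz, damping_time_s, phase)."""
    return sum(a * math.exp(-t / tau) * math.sin(2 * math.pi * f * t + phi)
               for a, f, tau, phi in modes)

# Hypothetical fundamental mode plus one overtone: similar frequency,
# much faster decay -- the qualitative pattern of quasinormal overtones
modes = [(1.0, 250.0, 4e-3, 0.0),
         (0.5, 240.0, 1.3e-3, 0.0)]
print(ringdown(0.0, modes))  # 0.0 at t=0 with zero phases
```

Because overtones decay much faster than the fundamental mode, omitting them biases fits most strongly at early times, which is the systematic effect the abstract quantifies.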

  17. IceCube systematic errors investigation: Simulation of the ice

    Energy Technology Data Exchange (ETDEWEB)

    Resconi, Elisa; Wolf, Martin [Max-Planck-Institute for Nuclear Physics, Heidelberg (Germany); Schukraft, Anne [RWTH, Aachen University (Germany)

    2010-07-01

    IceCube is a neutrino observatory for astroparticle and astronomy research at the South Pole. It uses one cubic kilometer of Antarctica's deepest ice (1500 m-2500 m in depth) to detect Cherenkov light, generated by charged particles traveling through the ice, with an array of phototubes encapsulated in glass pressure spheres. The arrival times and the deposited charges of the detected photons are the base measurements used for track and energy reconstruction of those charged particles. The optical properties of the deep Antarctic ice vary from layer to layer. Measurement of the ice properties and their correct modeling in Monte Carlo simulation are therefore of primary importance for the correct understanding of the IceCube telescope behavior. After a short summary of the different methods used to investigate the ice properties and to calibrate the detector, we show how the simulation obtained by using this information compares to the measured data and how systematic errors due to uncertain ice properties are determined in IceCube.

  18. Systematic Error of Acoustic Particle Image Velocimetry and Its Correction

    Directory of Open Access Journals (Sweden)

    Mickiewicz Witold

    2014-08-01

    Full Text Available Particle image velocimetry (PIV) is increasingly the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. PIV with a low sampling rate (about 15 Hz) can be applied to visualize the acoustic field using acquisition synchronized to the excitation signal. Such a phase-locked PIV technique is described and used in the experiments presented in the paper. The main goal of the research was to propose a model of the PIV systematic error due to the non-zero time interval between acquisitions of the two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. The usefulness of the presented model is confirmed experimentally. The correction procedure, based on the proposed model, applied to the measurement data increases the accuracy of acoustic particle velocity field visualization and creates new possibilities in the observation of sound fields excited with multi-tonal or band-limited noise signals.
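One plausible first-order version of such an error model (an assumption here, not the paper's exact derivation): PIV estimates velocity as mean displacement over the inter-frame interval Δt, so a sinusoidal particle velocity of frequency f is attenuated by the factor sin(πfΔt)/(πfΔt):

```python
import math

def piv_amplitude_factor(freq_hz, dt_s):
    """Amplitude attenuation of a sinusoidal particle velocity measured as
    mean displacement over the inter-frame interval dt: sin(pi*f*dt)/(pi*f*dt).
    Assumed first-order model, not the paper's exact formulation."""
    x = math.pi * freq_hz * dt_s
    return math.sin(x) / x if x else 1.0

# e.g. a 1 kHz tone measured with a 100 us inter-frame interval
print(round(piv_amplitude_factor(1000.0, 100e-6), 4))  # 0.9836
```

A correction procedure in this spirit divides each measured spectral amplitude by the factor for its frequency, which matters most for the multi-tonal and band-limited noise excitations mentioned in the abstract.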

  19. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    Full Text Available It is shown that only the hyperbolic law of the Periodic Table of Elements allows exact calculation of the atomic masses. The reference data of Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of the elements in the Table).

  1. Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.

    Science.gov (United States)

    Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith

    2013-09-01

    Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory.
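The forward problem underlying such tomography can be sketched with the standard straight-ray, effective-sound-speed approximation (an assumption for illustration; the paper's reconstruction and error-assessment algorithms are not reproduced here):

```python
import math

def travel_time(src, rcv, temperature_c, wind):
    """Straight-ray travel time between two 2-D points using the effective
    sound speed: c(T) plus the wind component along the propagation direction."""
    c = 331.3 * math.sqrt(1 + temperature_c / 273.15)  # dry-air approximation, m/s
    dx, dy = rcv[0] - src[0], rcv[1] - src[1]
    d = math.hypot(dx, dy)
    nx, ny = dx / d, dy / d  # unit vector from speaker to microphone
    return d / (c + wind[0] * nx + wind[1] * ny)

# 100 m path at 20 C with a 5 m/s tailwind along the path
print(round(travel_time((0, 0), (100, 0), 20.0, (5.0, 0.0)), 4))  # ~0.2872 s
```

Inverting many such speaker-microphone travel times yields the temperature and wind fields; a systematic timing offset on a ray biases exactly this relation, which is what the two algorithms in the abstract are designed to detect.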

  2. Drug administration errors in hospital inpatients: a systematic review.

    Science.gov (United States)

    Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

    2013-01-01

    Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. We aimed to analyze the prevalence and nature of administration error rate detected by the observation method. We searched Embase, MEDLINE, and the Cochrane Library from 1966 to December 2011, plus reference lists of included studies. Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate calculated as being the number of errors without wrong time errors divided by the Total Opportunity for Errors (TOE, sum of the total number of doses ordered plus the unordered doses given), and multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories from fatal to minor or no impact. Due to a large heterogeneity, results were expressed as median values (interquartile range, IQR), according to their study design. Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional studies (N=46). The median error rate without wrong time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. Administration errors are frequent among inpatients. The median error rate without wrong time errors for the cross-sectional studies using TOE was about 10%. A standardization of administration error rate using the same denominator (TOE), numerator and types of errors is essential for further publications.
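The review's error-rate definition translates directly into code (the audit numbers below are hypothetical):

```python
def administration_error_rate(errors_excl_wrong_time, doses_ordered, unordered_doses_given):
    """Error rate (%) = errors (wrong-time errors excluded) / TOE * 100,
    where TOE = total doses ordered + unordered doses given."""
    toe = doses_ordered + unordered_doses_given
    return 100.0 * errors_excl_wrong_time / toe

# Hypothetical ward audit: 21 qualifying errors over 195 ordered + 5 unordered doses
print(administration_error_rate(21, 195, 5))  # 10.5 -- coincidentally the review's median
```

Standardizing both the numerator (which error types count) and the denominator (TOE) in this way is exactly what the review's conclusion calls for.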

  3. Drug Administration Errors in Hospital Inpatients: A Systematic Review

    Science.gov (United States)

    Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

    2013-01-01

    Context Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. Objectives We aimed to analyze the prevalence and nature of administration error rate detected by the observation method. Data Sources Embase, MEDLINE, Cochrane Library from 1966 to December 2011 and reference lists of included studies. Study Selection Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. Data Extraction Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate calculated as being the number of errors without wrong time errors divided by the Total Opportunity for Errors (TOE, sum of the total number of doses ordered plus the unordered doses given), and multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories from fatal to minor or no impact. Due to a large heterogeneity, results were expressed as median values (interquartile range, IQR), according to their study design. Results Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional studies (N=46). The median error rate without wrong time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. Conclusions Administration errors are frequent among inpatients. The median error rate without wrong time errors for the cross-sectional studies using TOE was about 10%. A standardization of administration error rate using the same denominator (TOE), numerator and types of errors is essential for further publications. PMID:23818992

  4. Drug administration errors in hospital inpatients: a systematic review.

    Directory of Open Access Journals (Sweden)

    Sarah Berdot

    Full Text Available CONTEXT: Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. OBJECTIVES: We aimed to analyze the prevalence and nature of administration error rate detected by the observation method. DATA SOURCES: Embase, MEDLINE, Cochrane Library from 1966 to December 2011 and reference lists of included studies. STUDY SELECTION: Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. DATA EXTRACTION: Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate calculated as being the number of errors without wrong time errors divided by the Total Opportunity for Errors (TOE, sum of the total number of doses ordered plus the unordered doses given), and multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories from fatal to minor or no impact. Due to a large heterogeneity, results were expressed as median values (interquartile range, IQR), according to their study design. RESULTS: Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional studies (N=46). The median error rate without wrong time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. CONCLUSIONS: Administration errors are frequent among inpatients. The median error rate without wrong time errors for the cross-sectional studies using TOE was about 10%. A standardization of administration error rate using the same denominator (TOE), numerator and types of errors is essential for further publications.

  5. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of supplement 1 of GUM. (paper)
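The type A versus type B distinction discussed in the abstract can be sketched per the GUM: type B from a manufacturer's spec assuming a uniform (rectangular) distribution, type A from repeated readings (the sample readings below are hypothetical):

```python
import math
import statistics

def type_b_uniform(half_width):
    """Type B standard uncertainty for a uniform (rectangular) distribution
    of half-width a, per the GUM: u = a / sqrt(3)."""
    return half_width / math.sqrt(3)

def type_a(samples):
    """Type A standard uncertainty of the mean: s / sqrt(n)."""
    return statistics.stdev(samples) / math.sqrt(len(samples))

# Manufacturer spec: offset within +/-0.5 LSB -> type B evaluation
print(round(type_b_uniform(0.5), 4))  # 0.2887
# Repeated offset readings (hypothetical, in LSB) -> type A evaluation
print(round(type_a([0.11, 0.13, 0.12, 0.14, 0.10]), 4))
```

The abstract's point is precisely that these two evaluations can disagree in practice: the observed offsets may cluster off-center, so the spec-based uniform model needs a correction.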

  6. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motions were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motions were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, which is less impactful due to the stability in organ movement from DIBH. The systematic reproducibility is also half of the random error because the high efficiency of a modern linac machine can reduce the systematic uncertainty effectively, while the random errors are uncontrollable. (paper)
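A common convention for splitting a population's setup data into systematic and random components (assumed here; the abstract does not spell out its estimator) takes the group systematic error Σ as the SD of the per-patient mean shifts and the group random error σ as the RMS of the per-patient SDs:

```python
import math
import statistics

def population_setup_errors(per_patient_shifts_mm):
    """Group systematic error Sigma = SD of per-patient mean shifts;
    group random error sigma = RMS of per-patient SDs.
    Common convention, assumed -- not stated in the abstract above."""
    means = [statistics.mean(p) for p in per_patient_shifts_mm]
    sds = [statistics.stdev(p) for p in per_patient_shifts_mm]
    sigma_sys = statistics.stdev(means)
    sigma_rand = math.sqrt(sum(s * s for s in sds) / len(sds))
    return sigma_sys, sigma_rand

# Hypothetical vertical shifts (mm) for three patients over four fractions
shifts = [[0.2, 0.4, 0.1, 0.3], [-0.5, -0.2, -0.4, -0.3], [0.6, 0.9, 0.7, 0.8]]
sys_e, rand_e = population_setup_errors(shifts)
print(round(sys_e, 3), round(rand_e, 3))
```

Each patient's mean shift captures a reproducible (systematic) offset, while the spread around that mean captures day-to-day (random) variation, matching the interfraction/intrafraction split analyzed in the study.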

  7. The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews.

    Science.gov (United States)

    Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing

    2017-09-05

    Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 to be reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for

  8. Shifted Legendre method with residual error estimation for delay linear Fredholm integro-differential equations

    Directory of Open Access Journals (Sweden)

    Şuayip Yüzbaşı

    2017-03-01

    Full Text Available In this paper, we suggest a matrix method for obtaining the approximate solutions of the delay linear Fredholm integro-differential equations with constant coefficients using the shifted Legendre polynomials. The problem is considered with mixed conditions. Using the required matrix operations, the delay linear Fredholm integro-differential equation is transformed into a matrix equation. Additionally, error analysis for the method is presented using the residual function. Illustrative examples are given to demonstrate the efficiency of the method. The results obtained in this study are compared with the known results.
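The residual-function error estimate described, i.e. substituting the approximate solution back into the equation, can be sketched on a toy problem y' = y, y(0) = 1 (a deliberate simplification; the paper treats delay Fredholm integro-differential equations, and the coefficients below are rounded values computed for this sketch):

```python
def shifted_legendre(n, x):
    """Shifted Legendre polynomial P*_n(x) on [0, 1], via P_n(2x - 1)
    and the standard three-term recurrence."""
    t = 2.0 * x - 1.0
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * t * p1 - k * p0) / (k + 1)
    return p1

def residual(coeffs, x, h=1e-6):
    """Residual R(x) = y_N'(x) - y_N(x) for the toy problem y' = y, where
    y_N(x) = sum_k c_k P*_k(x); derivative by central difference."""
    y = lambda s: sum(c * shifted_legendre(k, s) for k, c in enumerate(coeffs))
    return (y(x + h) - y(x - h)) / (2 * h) - y(x)

# Degree-3 shifted-Legendre expansion of exp(x) on [0, 1] (coefficients rounded)
coeffs = [1.7183, 0.8452, 0.1399, 0.0138]
print(abs(residual(coeffs, 0.5)) < 0.05)  # True: the expansion nearly satisfies the ODE
```

A small residual over the whole interval certifies the approximation without knowing the exact solution, which is the role the residual function plays in the paper's error analysis.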

  9. Economic impact of medication error: a systematic review.

    Science.gov (United States)

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition

  11. IGS Rapid Orbits: Systematic Error at Day Boundaries

    National Research Council Canada - National Science Library

    Slabinski, Victor J

    2006-01-01

    When one fits a GPS spacecraft trajectory through several days of orbit positions from IGS Rapid orbit SP3 files, the orbit position residuals show discontinuities at the day boundaries between SP3 files...

  12. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

    based on NCEP (MRF) analysis-forecast system - A barotropic approach, Part I: in physical domain ... The results suggest that generation of random error in some geographical locations is perhaps due to the inefficient description of the sensible heating process in the model.

  13. Drug Administration Errors in Hospital Inpatients: A Systematic Review

    OpenAIRE

    Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

    2013-01-01

    CONTEXT: Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. OBJECTIVES: We aimed to analyze the prevalence and nature of administration error rate detected by the observation method. DATA SOURCES: Embase, MEDLINE, Cochrane Library from 1966 to December 2011 and reference lists of included studies. STUDY SELECTION: Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured th...

  14. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)


    based on NCEP (MRF) analysis-forecast system – A barotropic approach .... equations. The methodology used here has not previously been applied for error/predictability studies. So, as an initial work in this direction, one-month daily data are used to ... nonlinearities associated with quadratic and triple product.

  15. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  16. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)


    tor wind. Secondly, to remove the discrepancy generated between the practical forecastability and the theoretically calculated maximum range of predictability, the error in prediction depends not only on the atmospheric initial state but on other parameters also like period of forecast, size of transient disturbances, vertical ...

  17. Medication errors in the Middle East countries: a systematic review of the literature.

    Science.gov (United States)

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20 %) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1 % to 90.5 % for prescribing and from 9.4 % to 80 % for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15 % to 34.8 % of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality.

  18. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NARCIS (Netherlands)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Ozben, C. S.; Prasuhn, D.; Sandri, P. Levi; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-01-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY

  19. Tolerable systematic errors in Really Large Hadron Collider dipoles

    International Nuclear Information System (INIS)

    Peggs, S.; Dell, F.

    1996-01-01

    Maximum allowable systematic harmonics for arc dipoles in a Really Large Hadron Collider are derived. The possibility of half cell lengths much greater than 100 meters is justified. A convenient analytical model evaluating horizontal tune shifts is developed, and tested against a sample high field collider

  20. Investigation of systematic errors and estimation of $\\pi K$ atom lifetime

    CERN Document Server

    Yazkov, Valeriy

    2013-01-01

    This note describes details of analysis of data sample collected by DIRAC experiment on Ni target in 2008-2010 in order to estimate lifetime of $\pi K$ atoms. Experimental results consist of six distinct data samples: both charge combinations ($\pi^+K^−$ and $K^+\pi^−$ atoms) obtained in different experimental conditions corresponding to each year of data-taking. Sources of systematic errors are analyzed, and estimations of systematic errors are presented. Taking into account both statistical and systematic uncertainties, the lifetime of $\pi K$ atoms is estimated by maximum likelihood method.

  1. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

    We study the treatment of systematic errors in the determination of v2 for charged jets in √sNN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data we evaluate the χ2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ2 and p-values relative to a null (zero) result. We then recast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
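
    The equivalence between the two treatments of correlated systematic errors can be illustrated with a toy example (the numbers below are invented, not the ALICE 0-5% centrality data). A fully correlated systematic error contributes a rank-one term to the covariance matrix, and the resulting χ² equals the profiled nuisance-parameter form:

    ```python
    import numpy as np

    # Toy measurements in three bins (illustrative values, not ALICE data)
    y    = np.array([0.05, 0.04, 0.03])     # measured v2-like values
    stat = np.array([0.01, 0.01, 0.02])     # statistical errors
    corr = np.array([0.005, 0.004, 0.003])  # fully correlated systematic errors

    # Method 1: covariance = diagonal statistical part + rank-1 correlated part
    C = np.diag(stat**2) + np.outer(corr, corr)
    chi2_cov = y @ np.linalg.solve(C, y)    # chi^2 relative to a null (zero) result

    # Method 2: profile a nuisance parameter eps shifting all points coherently
    a = np.sum(corr * y / stat**2)
    b = np.sum(corr**2 / stat**2)
    eps = a / (1.0 + b)                     # analytic minimum of the penalized chi^2
    chi2_nui = np.sum((y - eps * corr)**2 / stat**2) + eps**2
    ```

    The two χ² values agree exactly (a consequence of the Sherman–Morrison identity), which is the equivalence demonstrated in the study.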

  2. Saccades to remembered target locations: an analysis of systematic and variable errors.

    Science.gov (United States)

    White, J M; Sparks, D L; Stanford, T R

    1994-01-01

    We studied the effects of varying delay interval on the accuracy and velocity of saccades to the remembered locations of visual targets. Remembered saccades were less accurate than control saccades. Both systematic and variable errors contributed to the loss of accuracy. Systematic errors were similar in size for delay intervals ranging from 400 msec to 5.6 sec, but variable errors increased monotonically as delay intervals were lengthened. Compared to control saccades, remembered saccades were slower and the peak velocities were more variable. However, neither peak velocity nor variability in peak velocity was related to the duration of the delay interval. Our findings indicate that a memory-related process is not the major source of the systematic errors observed on memory trials.
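
    The decomposition into systematic and variable error used in such studies can be sketched as follows (simulated endpoints with an assumed bias and scatter, not the study's data): the systematic error is the displacement of the mean endpoint from the target, the variable error the trial-to-trial scatter about that mean.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([10.0, 0.0])   # target location in degrees (illustrative)

    # Simulated saccade endpoints: a constant bias plus trial-to-trial scatter
    bias = np.array([-1.0, 0.5])     # assumed systematic offset (deg)
    scatter_sd = 0.8                 # assumed per-axis scatter (deg)
    endpoints = target + bias + rng.normal(0.0, scatter_sd, size=(500, 2))

    mean_endpoint = endpoints.mean(axis=0)
    systematic_error = np.linalg.norm(mean_endpoint - target)  # bias magnitude
    variable_error = endpoints.std(axis=0, ddof=1).mean()      # scatter magnitude
    ```

    On this construction the estimated systematic error recovers the injected bias (|(-1, 0.5)| ≈ 1.12°) while the variable error recovers the injected scatter, mirroring how the two components are separated empirically.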

  3. Estimating angle-dependent systematic error and measurement uncertainty for a conoscopic holography measurement system

    Science.gov (United States)

    Paviotti, Anna; Carmignato, Simone; Voltan, Alessandro; Laurenti, Nicola; Cortelazzo, Guido M.

    2009-01-01

    The aim of this study is to assess angle-dependent systematic errors and measurement uncertainties for a conoscopic holography laser sensor mounted on a Coordinate Measuring Machine (CMM). The main contribution of our work is the definition of a methodology for the derivation of point-sensitive systematic and random errors, which must be determined in order to evaluate the accuracy of the measuring system. An ad hoc three-dimensional artefact has been built for the task. The experimental test has been designed so as to isolate the effects of angular variations from those of other influence quantities that might affect the measurement result. We have identified the best measurand to assess angle-dependent errors, and obtained some preliminary results on the expression of the systematic error and measurement uncertainty as a function of the zenith angle for the chosen measurement system and sample material.

  4. MAXIMUM LIKELIHOOD ANALYSIS OF SYSTEMATIC ERRORS IN INTERFEROMETRIC OBSERVATIONS OF THE COSMIC MICROWAVE BACKGROUND

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Le; Timbie, Peter [Department of Physics, University of Wisconsin, Madison, WI 53706 (United States); Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Sutter, Paul M.; Wandelt, Benjamin D. [Department of Physics, 1110 W Green Street, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Bunn, Emory F., E-mail: lzhang263@wisc.edu [Physics Department, University of Richmond, Richmond, VA 23173 (United States)

    2013-06-01

    We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that does not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

  5. Using ridge regression in systematic pointing error corrections

    Science.gov (United States)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
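
    The multicollinearity problem and the ridge remedy can be sketched generically (synthetic regressors, not the Voyager tracking data). With two nearly identical regressors, ordinary least squares determines only their sum well; adding the ridge penalty λI to the normal matrix stabilizes the individual coefficients at the cost of a small bias:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.normal(size=n)
    x2 = x1 + rng.normal(scale=0.01, size=n)   # nearly collinear with x1
    X = np.column_stack([x1, x2])
    y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)

    # Ordinary least squares: the normal matrix X'X is ill-conditioned
    beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

    # Ridge regression: biased estimation via lambda*I on the normal matrix
    lam = 1.0
    beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
    ```

    Individually the OLS coefficients can swing wildly while their sum stays near the true value of 2; the ridge estimates are each close to 1, which is exactly the trade-off exploited when calibrating a pointing model with correlated regression variables.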

  6. Residual set-up errors and margins in on-line image-guided prostate localization in radiotherapy

    DEFF Research Database (Denmark)

    Poulsen, Per Rugaard; Muren, Ludvig; Høyer, Morten

    2007-01-01

    BACKGROUND AND PURPOSE: Image-guided on-line correction of the target position allows radiotherapy of prostate cancer with narrow set-up margins. The present study investigated the residual set-up error after on-line prostate localization and its impact on margins. MATERIALS AND METHODS: Prostate...... localization based on two orthogonal X-ray images of gold markers implanted in the prostate was performed with an on-board imager at four treatment sessions for 90 patients. The set-up error in the sagittal plane residual after couch adjustment was evaluated on lateral verification portal images. RESULTS...

  7. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin

    2016-12-09

    In this paper we develop an a posteriori error estimator for a mixed finite element method for single-phase Darcy flow in a two-dimensional fractured porous media. The discrete fracture model is applied to model the fractures by one-dimensional fractures in a two-dimensional domain. We consider Raviart–Thomas mixed finite element method for the approximation of the coupled Darcy flows in the fractures and the surrounding porous media. We derive a robust residual-based a posteriori error estimator for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator are given. Moreover, our numerical results indicate that the a posteriori error estimator also works well for the problem with intersecting fractures.

  8. Impact of radar systematic error on the orthogonal frequency division multiplexing chirp waveform orthogonality

    Science.gov (United States)

    Wang, Jie; Liang, Xingdong; Chen, Longyong; Ding, Chibiao

    2015-01-01

    Orthogonal frequency division multiplexing (OFDM) chirp waveform, which is composed of two successive identical linear frequency modulated subpulses, is a newly proposed orthogonal waveform scheme for multi-input multi-output synthetic aperture radar (SAR) systems. However, according to the waveform model, radar systematic error, which introduces phase or amplitude difference between the subpulses of the OFDM waveform, significantly degrades the orthogonality. The impact of radar systematic error on the waveform orthogonality is mainly caused by the systematic nonlinearity rather than the thermal noise or the frequency-dependent systematic error. Due to the influence of the causal filters, the first subpulse leaks into the second one. The leaked signal interacts with the second subpulse in the nonlinear components of the transmitter. This interaction renders a dramatic phase distortion at the beginning of the second subpulse. The resultant distortion, which leads to a phase difference between the subpulses, seriously damages the waveform's orthogonality. The impact of radar systematic error on the waveform orthogonality is addressed. Moreover, the impact of the systematic nonlinearity on the waveform is avoided by adding a standby between the subpulses. Theoretical analysis is validated by practical experiments based on a C-band SAR system.

  9. Impact of systematic errors on DVH parameters of different OAR and target volumes in Intracavitary Brachytherapy (ICBT)

    International Nuclear Information System (INIS)

    Mourya, Ankur; Singh, Gaganpreet; Kumar, Vivek; Oinam, Arun S.

    2016-01-01

    The aim of this study is to analyze the impact of systematic errors on DVH parameters of different OAR and target volumes in intracavitary brachytherapy (ICBT). To quantify the changes in dose-volume histogram parameters due to systematic errors in applicator reconstruction of brachytherapy planning, known errors in catheter reconstructions have to be introduced into the applicator coordinate system.

  10. Maximum Likelihood Analysis of Systematic Errors in Interferometric Observations of the Cosmic Microwave Background

    Science.gov (United States)

    Zhang, Le; Karakci, Ata; Sutter, Paul M.; Bunn, Emory F.; Korotkov, Andrei; Timbie, Peter; Tucker, Gregory S.; Wandelt, Benjamin D.

    2013-06-01

    We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that does not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

  11. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

    One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of the radionuclides that suddenly and without control reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and measurement itself that contribute substantially to the total mass activity evaluation error. Statistical errors in gamma spectrometry as well as in total mass alpha and beta activity evaluation are also discussed. Besides, some of the possible sources of errors in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of these errors to the total mass activity evaluation error is estimated and procedures that could possibly reduce it are discussed (author)

  12. Organochlorine pesticides residue in breast milk: a systematic review.

    Science.gov (United States)

    Pirsaheb, Meghdad; Limoee, Mojtaba; Namdari, Farideh; Khamutian, Razieh

    2015-01-01

    Chlorinated pesticides have been used in pest control for several decades in the world. These compounds are still applied in many regions, and their continuous usage has resulted in their bioaccumulation and residue in the food chain. These residues could transfer to food products and accumulate in fat tissues. Undoubtedly, the breast milk could be a significant biomarker for estimation of these residues in the human body. This study was conducted to review and compile the results of the studies undertaken in the world which surveyed the organochlorine pesticides residue in breast milk. A total of 710 national and international articles and texts related to the focused subject were extracted from the virtual databases using the following key words: Chlorinated pesticides, residue and breast milk. Thirty articles published between 1980 and 2013 were selected and reviewed. The majority of the reviewed articles indicated the presence of two or more organochlorine pesticides in the collected samples of breast milk. Based on the reviewed studies, dichlorodiphenyltrichloroethane (DDT) had the highest level of concentration in the collected samples of breast milk. Moreover, there was a statistically significant positive correlation between mother's age, her multiparity and concentration of chlorinated pesticides in breast milk. The organochlorine pesticides are still applied in some developing countries including some regions of Iran. Thus, it seems essential to inform the community about the adverse effects of this class of pesticides; and most importantly the governments should also ban the use of such compounds.

  13. Genetic properties of residual feed intakes for maintenance and growth and the implications of error measurement.

    Science.gov (United States)

    Rekaya, R; Aggrey, S E

    2015-03-01

    A procedure for estimating residual feed intake (RFI) based on information used in feeding studies is presented. Koch's classical model consists of using fixed regressions of feed intake on metabolic BW and growth, and RFI is obtained as the deviation between the observed feed intake and the expected intake for an individual with a given weight and growth rate. Estimated RFI following such a procedure intrinsically suffers from the inability to separate true RFI from the sampling error. As the latter is never equal to 0, estimated RFI is always biased, and the magnitude of such bias depends on the ratio between the true RFI variance and the residual variance. Additionally, the classical approach suffers from its inability to dissect RFI into its biological components, being the metabolic efficiency (maintaining BW) and growth efficiency. To remedy these problems we proposed a procedure that directly models the individual animal variation in feed efficiency used for body maintenance and growth. The proposed model is an extension of Koch's procedure by assuming animal-specific regression coefficients rather than population-level parameters. To evaluate the performance of both models, a data simulation was performed using the structure of an existing chicken data set consisting of 2,289 records. Data was simulated using 4 ratios between the true RFI and sampling error variances (1:1, 2:1, 4:1, and 10:1) and 5 correlation values between the 2 animal-specific random regression coefficients (-0.95, -0.5, 0, 0.5, and 0.95). The results clearly showed the superiority of the proposed model compared to Koch's procedure under all 20 simulation scenarios. In fact, when the ratio was 1:1 and the true genetic correlation was equal to -0.95, the correlation between the true and estimated RFI for animals in the top 20% was 0.60 and 0.51 for the proposed and Koch's models, respectively. This is an 18% superiority for the proposed model. 
For the bottom 20% of animals in the ranking ...
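
    Koch's baseline procedure described above amounts to regressing feed intake on metabolic body weight and gain and taking the residuals as RFI; the confounding of true RFI with sampling error then shows up as an attenuated correlation between estimated and true RFI. A minimal simulation of this (all parameter values invented for illustration, not from the chicken data set):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 300
    mbw  = rng.normal(2.0, 0.2, n)        # metabolic body weight (illustrative units)
    gain = rng.normal(0.05, 0.01, n)      # growth rate
    true_rfi = rng.normal(0.0, 0.05, n)   # true efficiency deviation
    noise    = rng.normal(0.0, 0.05, n)   # sampling error, 1:1 variance ratio with RFI
    fi = 0.8 * mbw + 3.0 * gain + true_rfi + noise   # observed feed intake

    # Koch's model: fixed regression of FI on MBW and gain; RFI = residual
    X = np.column_stack([np.ones(n), mbw, gain])
    beta, *_ = np.linalg.lstsq(X, fi, rcond=None)
    rfi_hat = fi - X @ beta

    r = np.corrcoef(rfi_hat, true_rfi)[0, 1]
    ```

    With equal RFI and error variances (the 1:1 scenario) the correlation between estimated and true RFI sits near 1/√2 ≈ 0.71 rather than 1, which is the bias the proposed animal-specific random-regression model aims to reduce.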

  14. Effects of averaging over motion and the resulting systematic errors in radiation therapy

    International Nuclear Information System (INIS)

    Evans, Philip M; Coolens, Catherine; Nioutsikou, Elena

    2006-01-01

    The potential for systematic errors in radiotherapy of a breathing patient is considered using the statistical model of Bortfeld et al (2002 Phys. Med. Biol. 47 2203-20). It is shown that although averaging over 30 fractions does result in a narrow Gaussian distribution of errors, as predicted by the central limit theorem, the fact that one or a few samples of the breathing patient's motion distribution are used for treatment planning (in contrast to the many treatment fractions that are likely to be delivered) may result in a much larger error with a systematic component. The error distribution may be particularly large if a scan at breath-hold is used for planning. (note)
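
    The distinction drawn in the note can be sketched numerically: the delivered dose averages the breathing motion over many fractions (the central limit theorem narrows this error), while the plan is built from a single sample of the same motion distribution, which then acts as a per-patient systematic offset. The values below are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    sigma = 5.0                          # breathing-motion SD in mm (assumed)
    n_patients, n_fractions = 2000, 30

    # Delivered error per patient: mean of 30 independent per-fraction positions
    delivered = rng.normal(0.0, sigma, (n_patients, n_fractions)).mean(axis=1)

    # Planning error: a single motion sample (one snapshot scan) per patient
    planning = rng.normal(0.0, sigma, n_patients)

    total = planning - delivered         # net geometric error per patient
    ```

    The delivered (random) component shrinks to σ/√30 ≈ 0.9 mm as the central limit theorem predicts, while the planning snapshot keeps its full σ = 5 mm and is repeated identically in every fraction, dominating the total error exactly as the note argues.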

  15. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.

  16. Dynamically correcting two-qubit gates against any systematic logical error

    Science.gov (United States)

    Calderon Vargas, Fernando Antonio

    The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.

  17. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  18. Systematic error mitigation in multi-GNSS positioning based on semiparametric estimation

    Science.gov (United States)

    Yu, Wenkun; Ding, Xiaoli; Dai, Wujiao; Chen, Wu

    2017-12-01

    Joint use of observations from multiple global navigation satellite systems (GNSS) is advantageous in high-accuracy positioning. However, systematic errors in the observations can significantly impact on the positioning accuracy if such errors cannot be properly mitigated. The errors can distort least squares estimations and also affect the results of variance component estimation that is frequently used to determine the stochastic model when observations from multiple GNSS are used. We present an approach that is based on the concept of semiparametric estimation for mitigating the effects of the systematic errors. Experimental results based on both simulated and real GNSS datasets show that the approach is effective, especially when applied before carrying out variance component estimation.

  19. Potential effects of systematic errors in intraocular pressure measurements on screening for ocular hypertension.

    Science.gov (United States)

    Turner, M J; Graham, S L; Avolio, A P; Mitchell, P

    2013-04-01

    Raised intraocular pressure (IOP) increases the risk of glaucoma. Eye-care professionals measure IOP to screen for ocular hypertension (OHT) (IOP>21 mm Hg) and to monitor glaucoma treatment. Tonometers commonly develop significant systematic measurement errors within months of calibration, and may not be verified often enough. There is no published evidence indicating how accurate tonometers should be. We analysed IOP measurements from a population study to estimate the sensitivity of detection of OHT to systematic errors in IOP measurements. We analysed IOP data from 3654 participants in the Blue Mountains Eye Study, Australia. An inverse cumulative distribution indicating the proportion of individuals with highest IOP>21 mm Hg was calculated. A second-order polynomial was fitted to the distribution and used to calculate over- and under-detection of OHT that would be caused by systematic measurement errors between -4 and +4 mm Hg. We calculated changes in the apparent prevalence of OHT caused by systematic errors in IOP. A tonometer that consistently under- or over-reads by 1 mm Hg will miss 34% of individuals with OHT, or yield 58% more positive screening tests, respectively. Tonometers with systematic errors of -4 and +4 mm Hg would miss 76% of individuals with OHT and would over-detect OHT by a factor of seven. Over- and under-detection of OHT are not strongly affected by cutoff IOP. We conclude that tonometers should be maintained and verified at intervals short enough to control systematic errors in IOP measurements to substantially less than 1 mm Hg.
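
    The sensitivity of OHT screening to tonometer bias can be sketched with a synthetic IOP distribution (a normal distribution with invented parameters, not the Blue Mountains Eye Study data, so the percentages differ from the study's figures):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    iop = rng.normal(16.0, 2.5, 200_000)  # synthetic 'true' IOPs in mm Hg (assumed)
    cutoff = 21.0
    true_oht = iop > cutoff

    def screened_positive(bias_mmhg):
        """Screening outcome when the tonometer reads with a systematic bias."""
        return (iop + bias_mmhg) > cutoff

    # Under-reading by 1 mm Hg: fraction of true OHT cases missed
    missed = 1.0 - screened_positive(-1.0)[true_oht].mean()

    # Over-reading by 1 mm Hg: relative excess of positive screening tests
    excess = screened_positive(+1.0).mean() / true_oht.mean() - 1.0
    ```

    Because OHT cases pile up just above the cutoff, even a 1 mm Hg bias shifts a large share of them across the threshold, matching the study's qualitative conclusion that systematic tonometer errors must be kept well below 1 mm Hg.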

  20. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Science.gov (United States)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  1. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors

    International Nuclear Information System (INIS)

    Martinelli, T.; Panini, G.C.; Amoroso, A.

    1989-11-01

    Information about systematic errors is not given in EXFOR, the database of nuclear experimental measurements: their assessment is left to the judgement of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, gives valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in experiments. The features of the project which led to the implementation of the system are presented. (author)

  2. Interventions to reduce medication errors in neonatal care: a systematic review.

    Science.gov (United States)

    Nguyen, Minh-Nha Rhylie; Mosel, Cassandra; Grzeskowiak, Luke E

    2018-02-01

    Medication errors represent a significant but often preventable cause of morbidity and mortality in neonates. The objective of this systematic review was to determine the effectiveness of interventions to reduce neonatal medication errors. A systematic review was undertaken of all comparative and noncomparative studies published in any language, identified from searches of PubMed and EMBASE and reference-list checking. Eligible studies were those investigating the impact of any medication safety interventions aimed at reducing medication errors in neonates in the hospital setting. A total of 102 studies were identified that met the inclusion criteria, including 86 comparative and 16 noncomparative studies. Medication safety interventions were classified into six themes: technology (n = 38; e.g. electronic prescribing), organizational (n = 16; e.g. guidelines, policies, and procedures), personnel (n = 13; e.g. staff education), pharmacy (n = 9; e.g. clinical pharmacy service), hazard and risk analysis (n = 8; e.g. error detection tools), and multifactorial (n = 18; e.g. any combination of previous interventions). Significant variability was evident across all included studies, with differences in intervention strategies, trial methods, types of medication errors evaluated, and how medication errors were identified and evaluated. Most studies demonstrated an appreciable risk of bias. The vast majority of studies (>90%) demonstrated a reduction in medication errors. A similar median reduction of 50-70% in medication errors was evident across studies included within each of the identified themes, but findings varied considerably, from a 16% increase in medication errors to a 100% reduction in medication errors. While neonatal medication errors can be reduced through multiple interventions aimed at improving the medication use process, no single intervention appeared clearly superior. Further research is required to evaluate the relative cost-effectiveness of the

  3. Systematic Magnus-Based Approach for Suppressing Leakage and Nonadiabatic Errors in Quantum Dynamics

    Science.gov (United States)

    Ribeiro, Hugo; Baksic, Alexandre; Clerk, Aashish A.

    2017-01-01

    We present a systematic, perturbative method for correcting quantum gates to suppress errors that take the target system out of a chosen subspace. Our method addresses the generic problem of nonadiabatic errors in adiabatic evolution and state preparation, as well as general leakage errors due to spurious couplings to undesirable states. The method is based on the Magnus expansion: By correcting control pulses, we modify the Magnus expansion of an initially given, imperfect unitary in such a way that the desired evolution is obtained. Applications to adiabatic quantum state transfer, superconducting qubits, and generalized Landau-Zener problems are discussed.

  4. Electronic portal image assisted reduction of systematic set-up errors in head and neck irradiation

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Soernsen de Koste, John R. van; Creutzberg, Carien L.; Visser, Andries G.; Levendag, Peter C.; Heijmen, Ben J.M.

    2001-01-01

    Purpose: To quantify systematic and random patient set-up errors in head and neck irradiation and to investigate the impact of an off-line correction protocol on the systematic errors. Material and methods: Electronic portal images were obtained for 31 patients treated for primary supra-glottic larynx carcinoma who were immobilised using a polyvinyl chloride cast. The observed patient set-up errors were input to the shrinking action level (SAL) off-line decision protocol and appropriate set-up corrections were applied. To assess the impact of the protocol, the positioning accuracy without application of set-up corrections was reconstructed. Results: The set-up errors obtained without set-up corrections (1 standard deviation (SD)=1.5-2 mm for random and systematic errors) were comparable to those reported in other studies on similar fixation devices. On average, six fractions per patient were imaged and the set-up of half the patients was changed due to the decision protocol. Most changes were detected during weekly check measurements, not during the first days of treatment. The application of the SAL protocol reduced the width of the distribution of systematic errors to 1 mm (1 SD), as expected from simulations. A retrospective analysis showed that this accuracy should be attainable with only two measurements per patient using a different off-line correction protocol, which does not apply action levels. Conclusions: Off-line verification protocols can be particularly effective in head and neck patients due to the smallness of the random set-up errors. The excellent set-up reproducibility that can be achieved with such protocols enables accurate dose delivery in conformal treatments

  5. Analysis of the Systematic Errors Found in the Kipp & Zonen Large-Aperture Scintillometer

    NARCIS (Netherlands)

    Kesteren, van A.J.H.; Hartogensis, O.K.

    2011-01-01

    Studies have shown a systematic error in the Kipp & Zonen large-aperture scintillometer (K&ZLAS) measurements of the sensible heat flux, H. We improved on these studies and compared four K&ZLASs with a Wageningen large-aperture scintillometer at the Chilbolton Observatory. The

  6. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments

  7. The reliability and measurement error of protractor-based goniometry of the fingers: A systematic review

    NARCIS (Netherlands)

    Kooij, Y.E. van; Fink, A.; Nijhuis-Van der Sanden, M.W.; Speksnijder, C.M.

    2017-01-01

    STUDY DESIGN: Systematic review PURPOSE OF THE STUDY: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. METHODS: Databases were searched for articles with key words "hand,"

  8. The reliability and measurement error of protractor-based goniometry of the fingers : A systematic review

    NARCIS (Netherlands)

    van Kooij, Yara E.; Fink, Alexandra; Nijhuis-van der Sanden, Maria W.; Speksnijder, Caroline M.|info:eu-repo/dai/nl/304821535

    2017-01-01

    Study Design: Systematic review. Purpose of the Study: The purpose was to review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. Methods: Databases were searched for articles with key words "hand,"

  9. End-point construction and systematic titration error in linear titration curves-complexation reactions

    NARCIS (Netherlands)

    Coenegracht, P.M.J.; Duisenberg, A.J.M.

    The systematic titration error which is introduced by the intersection of tangents to hyperbolic titration curves is discussed. The effects of the apparent (conditional) formation constant, of the concentration of the unknown component and of the ranges used for the end-point construction are

  10. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two ... These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.

  11. Random and systematic beam modulator errors in dynamic intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Parsai, Homayon; Cho, Paul S; Phillips, Mark H; Giansiracusa, Robert S; Axen, David

    2003-01-01

    This paper reports on the dosimetric effects of random and systematic modulator errors in delivery of dynamic intensity modulated beams. A sliding-window type delivery that utilizes a combination of multileaf collimators (MLCs) and backup diaphragms was examined. Gaussian functions with standard deviations ranging from 0.5 to 1.5 mm were used to simulate random positioning errors. A clinical example involving a clival meningioma was chosen with optic chiasm and brain stem as limiting critical structures in the vicinity of the tumour. Dose calculations for different modulator fluctuations were performed, and a quantitative analysis was carried out based on cumulative and differential dose volume histograms for the gross target volume and surrounding critical structures. The study indicated that random modulator errors have a strong tendency to reduce minimum target dose and homogeneity. Furthermore, it was shown that random perturbation of both MLCs and backup diaphragms in the order of σ = 1 mm can lead to 5% errors in prescribed dose. In comparison, when MLCs or backup diaphragms alone were perturbed, the system was more robust and modulator errors of at least σ = 1.5 mm were required to cause dose discrepancies greater than 5%. For systematic perturbation, even errors in the order of ±0.5 mm were shown to result in significant dosimetric deviations

  12. Random and systematic beam modulator errors in dynamic intensity modulated radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Parsai, Homayon [Department of Radiation Oncology, University of Washington, Box 356043, Seattle, WA 98195 (United States); Cho, Paul S [Department of Radiation Oncology, University of Washington, Box 356043, Seattle, WA 98195 (United States); Phillips, Mark H [Department of Radiation Oncology, University of Washington, Box 356043, Seattle, WA 98195 (United States); Giansiracusa, Robert S [Department of Radiation Oncology, University of Washington, Box 356043, Seattle, WA 98195 (United States); Axen, David [Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, V6T 1Z1 (Canada)

    2003-05-07

    This paper reports on the dosimetric effects of random and systematic modulator errors in delivery of dynamic intensity modulated beams. A sliding-window type delivery that utilizes a combination of multileaf collimators (MLCs) and backup diaphragms was examined. Gaussian functions with standard deviations ranging from 0.5 to 1.5 mm were used to simulate random positioning errors. A clinical example involving a clival meningioma was chosen with optic chiasm and brain stem as limiting critical structures in the vicinity of the tumour. Dose calculations for different modulator fluctuations were performed, and a quantitative analysis was carried out based on cumulative and differential dose volume histograms for the gross target volume and surrounding critical structures. The study indicated that random modulator errors have a strong tendency to reduce minimum target dose and homogeneity. Furthermore, it was shown that random perturbation of both MLCs and backup diaphragms in the order of σ = 1 mm can lead to 5% errors in prescribed dose. In comparison, when MLCs or backup diaphragms alone were perturbed, the system was more robust and modulator errors of at least σ = 1.5 mm were required to cause dose discrepancies greater than 5%. For systematic perturbation, even errors in the order of ±0.5 mm were shown to result in significant dosimetric deviations.
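
    A toy Monte Carlo in the spirit of the perturbation study above can be set up in a few lines; the sliding-window leaf trajectories, the σ values, and the gap-based dose surrogate are invented for illustration and do not reproduce the paper's treatment-planning calculations.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)      # control points of the sliding-window delivery
lead = 60.0 * t                     # leading-leaf position (mm), hypothetical
trail = 60.0 * t - 10.0             # trailing leaf follows 10 mm behind
nominal_gap = lead - trail          # constant 10 mm aperture

def mean_gap_error(sigma, trials=2000):
    """Average relative error in the time-integrated gap for Gaussian leaf errors."""
    errs = rng.normal(0.0, sigma, (trials, 2, t.size))
    gap = nominal_gap + errs[:, 0] - errs[:, 1]
    return np.mean(np.abs(gap.mean(axis=1) / nominal_gap.mean() - 1.0))

# Random errors partially average out over the delivery; a systematic
# (constant) offset of each bank adds directly to the aperture.
random_err = mean_gap_error(1.0)
systematic_err = abs((nominal_gap + 0.5 - (-0.5)).mean() / nominal_gap.mean() - 1.0)
```

    In this crude surrogate the delivered fluence scales with the time-integrated gap, so a fixed ±0.5 mm offset of both banks costs a full 10% while σ = 1 mm random jitter largely cancels, mirroring the paper's finding that systematic perturbations are the more damaging failure mode.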

  13. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    Science.gov (United States)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. 
We compare it to more classical
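
    The temporal model sketched in the abstract, basis-function coefficients evolving as a vector autoregressive process, can be written in a few lines; the dimension, transition matrix, and noise scale below are invented for illustration and are unrelated to the actual OCO-2 analysis.

```python
import numpy as np

rng = np.random.default_rng(7)
k = 4                               # number of flux basis functions (assumed)
A = 0.8 * np.eye(k)                 # VAR(1) transition matrix, assumed diagonal
Q = 1.0                             # innovation standard deviation (assumed)

alpha = np.zeros(k)                 # basis-function coefficients
traj = []
for _ in range(36):                 # e.g. monthly time steps
    alpha = A @ alpha + rng.normal(0.0, Q, k)
    traj.append(alpha.copy())
traj = np.asarray(traj)
# With |A| < 1 the process is stationary; each coefficient's variance
# tends to Q**2 / (1 - 0.8**2).
```

    In a fully Bayesian treatment such as the one described, A and Q would be unknown parameters sampled by MCMC alongside the coefficients, rather than fixed as here.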

  14. Assessment of residual error in liver position using kV cone-beam computed tomography for liver cancer high-precision radiation therapy

    International Nuclear Information System (INIS)

    Hawkins, Maria A.; Brock, Kristy K.; Eccles, Cynthia; Moseley, Douglas; Jaffray, David; Dawson, Laura A.

    2006-01-01

    Purpose: To evaluate the residual error in liver position using breath-hold kilovoltage (kV) cone-beam computed tomography (CT) following on-line orthogonal megavoltage (MV) image-guided breath-hold liver cancer conformal radiotherapy. Methods and Materials: Thirteen patients with liver cancer treated with 6-fraction breath-hold conformal radiotherapy were investigated. Before each fraction, orthogonal MV images were obtained during exhale breath-hold, with repositioning for offsets >3 mm, using the diaphragm for cranio-caudal (CC) alignment and vertebral bodies for medial-lateral (ML) and anterior posterior (AP) alignment. After repositioning, repeat orthogonal MV images, orthogonal kV fluoroscopic movies, and kV cone-beam CTs were obtained in exhale breath-hold. The cone-beam CT livers were registered to the planning CT liver to obtain the residual setup error in liver position. Results: After repositioning, 78 orthogonal MV image pairs, 61 orthogonal kV image pairs, and 72 kV cone-beam CT scans were obtained. Population random setup errors (σ) in liver position were 2.7 mm (CC), 2.3 mm (ML), and 3.0 mm (AP), and systematic errors (Σ) were 1.1 mm, 1.9 mm, and 1.3 mm in the superior, medial, and posterior directions. Liver offsets >5 mm were observed in 33% of cases; offsets >10 mm and liver deformation >5 mm were observed in a minority of patients. Conclusions: Liver position after radiation therapy guided with MV orthogonal imaging was within 5 mm of planned position in the majority of patients. kV cone-beam CT image guidance should improve accuracy with reduced dose compared with orthogonal MV image guidance for liver cancer radiation therapy
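
    The population random error σ and systematic error Σ quoted above are conventionally computed from per-patient, per-fraction offsets; a sketch with simulated offsets follows (the patient and fraction counts mirror the study's scale, but the numbers are synthetic, not the measured data).

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated cranio-caudal offsets (mm): 13 patients x 6 fractions
patient_bias = rng.normal(0.0, 1.1, (13, 1))          # per-patient systematic part
offsets = patient_bias + rng.normal(0.0, 2.7, (13, 6))

means = offsets.mean(axis=1)
Sigma = means.std(ddof=1)                              # systematic error: SD of patient means
sigma = np.sqrt(np.mean(offsets.var(axis=1, ddof=1)))  # random error: RMS of per-patient SDs
```

    Note that the SD of the patient means slightly overestimates Σ, since each mean still carries σ²/n of residual random error; some analysis protocols subtract that term.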

  15. Effects of Systematic Error Correction and Repeated Readings on the Reading Accuracy and Proficiency of Second Graders with Disabilities

    Science.gov (United States)

    Nelson, Janet S.; Alber, Sheila R.; Gordy, Alicia

    2004-01-01

    This investigation used a multiple-baseline design to examine the effects of systematic error correction and of systematic error correction with repeated readings on the reading accuracy and fluency of four second-graders receiving special education services in a resource room. Three of the students were identified as having learning disabilities,…

  16. Prevalence of computerized physician order entry systems-related medication prescription errors: A systematic review.

    Science.gov (United States)

    Korb-Savoldelli, Virginie; Boussadi, Abdelali; Durieux, Pierre; Sabatier, Brigitte

    2018-03-01

    The positive impact of computerized physician order entry (CPOE) systems on prescription safety must be considered in light of the persistence of certain types of medication-prescription errors. We performed a systematic review, based on the PRISMA statement, to analyze the prevalence of prescription errors related to the use of CPOE systems. We searched MEDLINE, EMBASE, CENTRAL, DBLP, the International Clinical Trials Registry, the ISI Web of Science, and reference lists of relevant articles from March 1982 to August 2017. We included original peer-reviewed studies which quantitatively reported medication-prescription errors related to CPOE. We analyzed the prevalence of medication-prescription errors according to an adapted version of the National Coordinating Council for Medication Error Reporting and Prevention (NCCMERP) taxonomy and assessed the mechanisms responsible for each type of prescription error due to CPOE. Fourteen studies were included. The prevalence of CPOE systems-related medication errors relative to all prescription medication errors ranged from 6.1 to 77.7% (median = 26.1% [IQR: 17.6-42.1]) and was less than 6.3% relative to the number of prescriptions reviewed. All studies reported "wrong dose" and "wrong drug" errors. The "wrong dose" error was the most frequently reported (from 7 to 67.4%, median = 31.5% [IQR: 20.5-44.5]). We report the associated mechanism for each type of medication error described (those due to CPOE or those occurring despite CPOE). We observed very heterogeneous results, probably due to the definition of error, the type of health information system used for the study, and the data collection method used. Each data collection method provides valuable and useful information concerning the prevalence and specific types of errors related to CPOE systems. The reporting of prescription errors should be continued because the weaknesses of CPOE systems are potential sources of error. Analysis of the mechanisms behind CPOE

  17. Characterization of electromagnetic fields in the aSPECT spectrometer and reduction of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Ayala Guardia, Fidel

    2011-10-15

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully implemented. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits improvement of the present literature value of a. Furthermore, a custom NMR magnetometer was developed and improved during this thesis, which will reduce magnetic field-related uncertainties to a negligible level, allowing a to be determined with a total relative error of 0.3% or better.

  18. Characterization of electromagnetic fields in the aSPECT spectrometer and reduction of systematic errors

    International Nuclear Information System (INIS)

    Ayala Guardia, Fidel

    2011-10-01

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully implemented. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits improvement of the present literature value of a. Furthermore, a custom NMR magnetometer was developed and improved during this thesis, which will reduce magnetic field-related uncertainties to a negligible level, allowing a to be determined with a total relative error of 0.3% or better.

  19. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevin.coakley@nist.go [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S. [National Institute of Standards and Technology, Gaithersburg, MD (United States); Yue, A.T. [University of Tennessee, Knoxville, TN (United States); Laptev, A.B. [Tulane University, New Orleans, LA (United States)

    2009-12-11

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.

  20. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Science.gov (United States)

    Coakley, K. J.; Dewey, M. S.; Yue, A. T.; Laptev, A. B.

    2009-12-01

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.
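
    The sensitivity of a Bragg-determined wavelength (λ = 2d sin θ) to a small tilt of the scattering-plane normal can be illustrated with a toy Monte Carlo; the cosine-projection model, the 0.1° tilt scale, and the nominal angle are illustrative assumptions, not the full geometry analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3.1356e-10                       # Si(111) plane spacing (m), nominal
theta = np.radians(30.0)             # illustrative Bragg angle
lam_nominal = 2.0 * d * np.sin(theta)

# Tilt of the plane normal out of the diffraction plane, assumed ~0.1 deg (1 SD)
tilt = rng.normal(0.0, np.radians(0.1), 100_000)
lam_apparent = 2.0 * d * np.sin(theta) * np.cos(tilt)  # small-tilt projection model
rel_bias = 1.0 - lam_apparent.mean() / lam_nominal     # ~ sigma_tilt**2 / 2
```

    Because cos(tilt) ≤ 1, the misalignment biases the apparent wavelength in one direction only; the mean relative error grows quadratically with the tilt spread, which is why controlling geometric alignment matters when targeting ~0.1% wavelength uncertainty.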

  1. Effects of residual hearing on cochlear implant outcomes in children: A systematic-review.

    Science.gov (United States)

    Chiossi, Julia Santos Costa; Hyppolito, Miguel Angelo

    2017-09-01

    To investigate whether preoperative residual hearing in prelingually deafened children can affect cochlear implant indication and outcomes, a systematic review was conducted in five international databases up to November 2016 to locate articles that evaluated cochlear implantation in children with some degree of preoperative residual hearing. Outcomes were auditory, language and cognition performances after cochlear implant. The quality of the studies was assessed and classified according to the Oxford Levels of Evidence table - 2011. Risks of bias were also described. From the 30 articles reviewed, two types of questions were identified: (a) what are the benefits of cochlear implantation in children with residual hearing? (b) is preoperative residual hearing a predictor of cochlear implant outcome? Studies ranged from 4 to 188 subjects, evaluating populations between 1.8 and 10.3 years old. The definition of residual hearing varied between studies. The majority of articles (n = 22) evaluated speech perception as the outcome and 14 also assessed language and speech production. There is evidence that cochlear implants are beneficial to children with residual hearing. Preoperative residual hearing seems to be valuable for predicting speech perception outcomes after cochlear implantation, even though the mechanism is not clear. More extensive research must be conducted in order to make recommendations and to set prognoses for cochlear implants based on children's preoperative residual hearing. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.
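
    The paper's central point, that the damage done by a fixed systematic bias depends on the inherent prediction uncertainty, can be seen in a toy persistence forecast; the random-walk state, the bias of 0.5, and the noise levels below are invented for illustration and are not the paper's hydrologic experiments.

```python
import numpy as np

rng = np.random.default_rng(5)

def forecast_rmse(bias, q, n=20_000):
    """RMSE of biased one-step persistence forecasts of a random-walk state.

    q plays the role of the inherent (bias-free) prediction uncertainty.
    """
    x = np.cumsum(rng.normal(0.0, 1.0, n))           # hypothetical catchment state
    fc = x[:-1] + bias + rng.normal(0.0, q, n - 1)   # biased forecast of x[1:]
    return np.sqrt(np.mean((fc - x[1:]) ** 2))

# The same bias costs proportionally more skill when q is small
loss_low_q = forecast_rmse(0.5, 0.2) / forecast_rmse(0.0, 0.2)
loss_high_q = forecast_rmse(0.5, 2.0) / forecast_rmse(0.0, 2.0)
```

    Since the error components add in quadrature, a bias b inflates the RMSE by roughly sqrt(1 + b²/s²) where s is the bias-free error; when s is already large, the same b is nearly invisible, consistent with the paper's conclusion that land cover change does not always necessitate recalibration.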

  3. Minimization and Mitigation of Wire EDM Cutting Errors in the Application of the Contour Method of Residual Stress Measurement

    Science.gov (United States)

    Ahmad, Bilal; Fitzpatrick, Michael E.

    2016-01-01

    The contour method of residual stress measurement relies on the careful application of wire electro-discharge machining (WEDM) for the cutting stage. Changes in material removal rates during the cut lead to errors in the final calculated values of residual stress. In this study, WEDM cutting parameters have been explored to identify the optimum conditions for contour method residual stress measurements. The influence of machine parameters on the surface roughness and cutting artifacts in the contour cut is discussed. It has been identified that the critical parameter in improving the surface finish is the spark pulse duration. A typical cutting artifact and its impact on measured stress values have been identified and demonstrated for a contour cut in a welded marine steel. A procedure is presented to correct contour displacement data for the influence of WEDM cutting artifacts, and is demonstrated on the correction of a measured weld residual stress. Applying the correction altered the measured residual stress magnitude by up to 150 MPa. The corrected contour method results were validated by X-ray diffraction, incremental center hole drilling, and neutron diffraction.

  4. Systematic and random errors in self-mixing measurements: effect of the developing speckle statistics.

    Science.gov (United States)

    Donati, Silvano; Martini, Giuseppe

    2014-08-01

    We consider the errors introduced by speckle pattern statistics of a diffusing target in the measurement of large displacements made with a self-mixing interferometer (SMI), with sub-λ resolution and a range up to meters. As the source on the target side, we assume a diffuser with randomly distributed roughness. Two cases are considered: (i) a developing randomness in z-height profile, with standard deviation σ(z), increasing from ≪λ to ≫λ and uncorrelated spatially (x,y), and (ii) a fully developed z-height randomness (σ(z)≫λ) but spatially correlated with various correlation sizes ρ(x,y). We find that systematic and random errors of all types of diffusers converge to those of a uniformly illuminated diffuser, independent of the actual profile of radiant emittance and phase distribution, when the standard deviation σ(z) is increased or the scale of correlation ρ(x,y) is decreased. This convergence is a sign of speckle statistics development, as all distributions end up with the same errors as the fully developed diffuser. Convergence is earlier for a Gaussian-distributed amplitude than for other spot distributions. As an application of the simulation results, we plot systematic and random errors of SMI measurements of displacement versus distance, for different source distributions, standard deviations, and correlations, both for intra- and inter-speckle displacements.
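    The transition to fully developed speckle as σ(z) grows can be sketched numerically (an illustrative toy, not the authors' SMI model): for a Gaussian z-height profile, the coherent sum of round-trip phasors exp(i·4πz/λ) collapses once σ(z) reaches a modest fraction of λ. The wavelength and scatterer count below are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    lam = 0.85e-6            # hypothetical laser wavelength, 850 nm
    n_scatterers = 4096

    def coherence(sigma_z):
        """|<exp(i*phi)>| over scatterers, with round-trip phase 4*pi*z/lam.
        Near 1 -> mirror-like surface; near 0 -> fully developed speckle."""
        z = rng.normal(0.0, sigma_z, n_scatterers)
        phasors = np.exp(1j * 4 * np.pi * z / lam)
        return abs(phasors.mean())

    # For a Gaussian height distribution the expected coherence is
    # exp(-0.5 * (4*pi*sigma_z/lam)**2): it collapses once sigma_z ~ lam/10.
    for frac in (0.01, 0.05, 0.1, 0.5):
        print(f"sigma_z = {frac:>4.2f} lambda -> coherence {coherence(frac * lam):.3f}")
    ```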

  5. The Effects of Bar-coding Technology on Medication Errors: A Systematic Literature Review.

    Science.gov (United States)

    Hutton, Kevin; Ding, Qian; Wellman, Gregory

    2017-02-24

    Adoption of bar-coding technology has risen drastically in U.S. health systems in the past decade. However, few studies have addressed the impact of bar-coding technology with strong prospective methodologies, and research has been conducted on both in-pharmacy and bedside implementations. This systematic literature review examines the effectiveness of bar-coding technology in preventing medication errors, and which types of medication errors may be prevented, in the hospital setting. A systematic search of databases was performed from 1998 to December 2016. Studies measuring the effect of bar-coding technology on medication errors were included in a full-text review. Studies with outcomes other than medication errors, such as efficiency or workarounds, were excluded. The outcomes were measured and findings were summarized for each retained study. A total of 2603 articles were initially identified and 10 studies, which used prospective before-and-after study designs, were fully reviewed in this article. Of the 10 included studies, 9 took place in the United States, whereas the remaining study was conducted in the United Kingdom. One research article focused on bar-coding implementation in a pharmacy setting, whereas the other 9 focused on bar coding within patient care areas. All 10 studies showed overall positive effects associated with bar-coding implementation. The results of this review show that bar-coding technology may reduce medication errors in hospital settings, particularly by preventing targeted wrong dose, wrong drug, wrong patient, unauthorized drug, and wrong route errors.

  6. The causes of and factors associated with prescribing errors in hospital inpatients: a systematic review.

    Science.gov (United States)

    Tully, Mary P; Ashcroft, Darren M; Dornan, Tim; Lewis, Penny J; Taylor, David; Wass, Val

    2009-01-01

    Prescribing errors are common, they result in adverse events and harm to patients, and it is unclear how best to prevent them because recommendations are more often based on surmise than on empirically collected data. The aim of this systematic review was to identify all informative published evidence concerning the causes of and factors associated with prescribing errors in specialist and non-specialist hospitals, collate it, analyse it qualitatively and synthesize conclusions from it. Seven electronic databases were searched for articles published between 1985 and July 2008. The reference lists of all informative studies were searched for additional citations. To be included, a study had to be of handwritten prescriptions for adult or child inpatients that reported empirically collected data on the causes of or factors associated with errors. Publications in languages other than English and studies that evaluated errors for only one disease, one route of administration or one type of prescribing error were excluded. Seventeen papers reporting 16 studies, selected from 1268 papers identified by the search, were included in the review. Studies from the US and the UK in university-affiliated hospitals predominated (10/16 [62%]). The definition of a prescribing error varied widely and the included studies were highly heterogeneous. Causes were grouped according to Reason's model of accident causation into active failures, error-provoking conditions and latent conditions. The active failure most frequently cited was a mistake due to inadequate knowledge of the drug or the patient. Skills-based slips and memory lapses were also common. Where error-provoking conditions were reported, there was at least one per error. These included lack of training or experience, fatigue, stress, high workload for the prescriber and inadequate communication between healthcare professionals. 
Latent conditions included reluctance to question senior colleagues and inadequate provision of

  7. Interventions to reduce dosing errors in children: a systematic review of the literature.

    Science.gov (United States)

    Conroy, Sharon; Sweis, Dimah; Planner, Claire; Yeung, Vincent; Collier, Jacqueline; Haines, Linda; Wong, Ian C K

    2007-01-01

    Children are a particularly challenging group of patients when trying to ensure the safe use of medicines. The increased need for calculations, dilutions and manipulations of paediatric medicines, together with a need to dose on an individual patient basis using age, gestational age, weight and surface area, means that they are more prone to medication errors at each stage of the medicines management process. It is already known that dose calculation errors are the most common type of medication error in neonatal and paediatric patients. Interventions to reduce the risk of dose calculation errors are therefore urgently needed. A systematic literature review was conducted to identify published articles reporting interventions; 28 studies were found to be relevant. The main interventions found were computerised physician order entry (CPOE) and computer-aided prescribing. Most CPOE and computer-aided prescribing studies showed some degree of reduction in medication errors, with some claiming no errors occurring after implementation of the intervention. However, one study showed a significant increase in mortality after the implementation of CPOE. Further research is needed to investigate outcomes such as mortality and economics. Unit dose dispensing systems and educational/risk management programmes were also shown to reduce medication errors in children. Although it is suggested that 'smart' intravenous pumps can potentially reduce infusion errors in children, there is insufficient information to draw a conclusion because of a lack of research. Most interventions identified were US based, and since medicine management processes are currently different in different countries, there is a need to interpret the information carefully when considering implementing interventions elsewhere.

  8. Causes of medication administration errors in hospitals: a systematic review of quantitative and qualitative evidence.

    Science.gov (United States)

    Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M

    2013-11-01

    Underlying systems factors have been seen to be crucial contributors to the occurrence of medication errors. By understanding the causes of these errors, the most appropriate interventions can be designed and implemented to minimise their occurrence. This study aimed to systematically review and appraise empirical evidence relating to the causes of medication administration errors (MAEs) in hospital settings. Nine electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, ASSIA, PsycINFO, British Nursing Index, CINAHL, Health Management Information Consortium and Social Science Citations Index) were searched between 1985 and May 2013. Inclusion and exclusion criteria were applied to identify eligible publications through title analysis followed by abstract and then full text examination. English language publications reporting empirical data on causes of MAEs were included. Reference lists of included articles and relevant review papers were hand searched for additional studies. Studies were excluded if they did not report data on specific MAEs, used accounts from individuals not directly involved in the MAE concerned or were presented as conference abstracts with insufficient detail. A total of 54 unique studies were included. Causes of MAEs were categorised according to Reason's model of accident causation. Studies were assessed to determine relevance to the research question and how likely the results were to reflect the potential underlying causes of MAEs based on the method(s) used. Slips and lapses were the most commonly reported unsafe acts, followed by knowledge-based mistakes and deliberate violations. Error-provoking conditions influencing administration errors included inadequate written communication (prescriptions, documentation, transcription), problems with medicines supply and storage (pharmacy dispensing errors and ward stock management), high perceived workload, problems with ward-based equipment (access, functionality

  9. A more realistic estimate of the variances and systematic errors in spherical harmonic geomagnetic field models

    DEFF Research Database (Denmark)

    Lowes, F.J.; Olsen, Nils

    2004-01-01

    Most modern spherical harmonic geomagnetic models based on satellite data include estimates of the variances of the spherical harmonic coefficients of the model; these estimates are based on the geometry of the data and the fitting functions, and on the magnitude of the residuals. However, …, led to quite inaccurate variance estimates. We estimate correction factors which range from 1/4 to 20, with the largest increases being for the zonal, m = 0, and sectorial, m = n, terms. With no correction, the OSVM variances give a mean-square vector field error of prediction over the Earth's surface…

  10. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    Energy Technology Data Exchange (ETDEWEB)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada and Department of Physics and Astronomy, University of Calgary, 2500 University Drive North West, Calgary, Alberta T2N 1N4 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4 (Canada) and Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada)

    2010-07-15

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
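    The gamma index used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1D sketch (illustrative only; clinical tools work on 2D/3D dose grids with interpolation, and the profile below is invented) shows why a sub-millimetre leaf-bank offset can sail through a 2%/2 mm test:

    ```python
    import numpy as np

    def gamma_index_1d(x, dose_ref, dose_eval, dta_mm=2.0, dd_frac=0.02):
        """Global 1D gamma index: for each reference point, search the
        evaluated profile for the minimum combined DTA / dose-difference
        metric. gamma <= 1 counts as a pass."""
        d_max = dose_ref.max()                      # global normalisation
        gam = np.empty_like(dose_ref)
        for i, (xi, di) in enumerate(zip(x, dose_ref)):
            dist2 = ((x - xi) / dta_mm) ** 2
            dd2 = ((dose_eval - di) / (dd_frac * d_max)) ** 2
            gam[i] = np.sqrt(np.min(dist2 + dd2))
        return gam

    # Hypothetical flat-topped field with a 0.5 mm systematic shift,
    # standing in for a leaf-bank offset
    x = np.arange(0, 100, 0.5)                      # positions in mm
    ref = np.exp(-((x - 50) / 15) ** 4)
    shifted = np.exp(-((x - 50.5) / 15) ** 4)       # same field, 0.5 mm offset
    g = gamma_index_1d(x, ref, shifted)
    print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
    ```

    Because the 0.5 mm shift is well inside the 2 mm DTA tolerance, essentially every point passes, which is consistent with the paper's conclusion that per-field QC criteria are insensitive to small systematic MLC offsets.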

  11. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    Science.gov (United States)

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge

  12. On the effects of systematic errors in analysis of nuclear scattering data.

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.

    1995-07-05

    The effects of systematic errors in elastic scattering differential cross-section data upon the assessment of the quality of fits to that data have been studied. Three cases are studied, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from ¹²C, of 350 MeV ¹⁶O-¹⁶O scattering and of 288.6 MeV ¹²C-¹²C scattering. First, to estimate the probability of any unknown systematic errors, select sets of data have been processed using the method of generalized cross validation, a method based upon the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to data, upon allowance for angle variation, became overdetermined. A far simpler S function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed energy inverse scattering study to specify effective, local, Schroedinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs.
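    Generalized cross validation, used here to flag unknown systematic errors via an optimal-smoothness criterion, can be sketched for a generic noisy data set (synthetic data, not the scattering sets above): choose the penalty weight λ minimising V(λ) = n‖(I − A)y‖² / [tr(I − A)]², where A is the smoother matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic noisy "data set" (illustrative only)
    n = 60
    t = np.linspace(0, 1, n)
    y_true = np.sin(2 * np.pi * t) + 0.5 * np.sin(6 * np.pi * t)
    y = y_true + rng.normal(0, 0.2, n)

    # Second-difference penalty smoother: fit = (I + lam * D'D)^-1 y
    D = np.diff(np.eye(n), n=2, axis=0)
    P = D.T @ D

    def gcv_score(lam):
        """GCV score V(lam) = n * ||(I - A) y||^2 / tr(I - A)^2,
        where A = (I + lam * P)^-1 is the smoother matrix."""
        A = np.linalg.inv(np.eye(n) + lam * P)
        resid = y - A @ y
        return n * (resid @ resid) / (n - np.trace(A)) ** 2

    lams = 10.0 ** np.arange(-6, 3)
    best = min(lams, key=gcv_score)
    print(f"GCV-selected smoothing parameter: {best:g}")
    ```

    The grid of candidate λ values and the second-difference penalty are assumptions for the sketch; any quadratic roughness penalty gives the same GCV machinery.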

  13. Fitting correlated residual error structures in nonlinear mixed-effects models using SAS PROC NLMIXED.

    Science.gov (United States)

    Harring, Jeffrey R; Blozis, Shelley A

    2014-06-01

    Nonlinear mixed-effects (NLME) models remain popular among practitioners for analyzing continuous repeated measures data taken on each of a number of individuals when interest centers on characterizing individual-specific change. Within this framework, variation and correlation among the repeated measurements may be partitioned into interindividual variation and intraindividual variation components. The covariance structure of the residuals is, in many applications, consigned to be independent with homogeneous variances, σ²I, not because it is believed that intraindividual variation adheres to this structure, but because many software programs that estimate parameters of such models are not well equipped to handle other, possibly more realistic, patterns. In this article, we describe how the programmatic environment within SAS may be utilized to model residual structures for serial correlation and variance heterogeneity. An empirical example is used to illustrate the capabilities of the module.
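    The article works in SAS PROC NLMIXED; as a language-neutral illustration of the same idea (simulated data, linear rather than nonlinear for brevity), a generalized-least-squares fit with an AR(1) residual correlation R[i, j] = ρ^|i−j| shows how a serial-correlation structure enters the estimator instead of the default σ²I:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated repeated measures for one subject: linear trend with
    # AR(1) intraindividual errors (illustrative, not the article's data)
    n = 40
    t = np.arange(n, dtype=float)
    rho_true, sigma = 0.6, 1.0
    e = np.zeros(n)
    for i in range(1, n):
        e[i] = rho_true * e[i - 1] + rng.normal(0, sigma)
    y = 2.0 + 0.5 * t + e                  # intercept 2.0, slope 0.5
    X = np.column_stack([np.ones(n), t])

    def gls_ar1(y, X, rho):
        """Generalized least squares with AR(1) residual correlation
        R[i, j] = rho**|i - j|; rho = 0 reduces to ordinary least squares."""
        m = len(y)
        idx = np.arange(m)
        R = rho ** np.abs(np.subtract.outer(idx, idx))
        Ri = np.linalg.inv(R)
        return np.linalg.solve(X.T @ Ri @ X, X.T @ Ri @ y)

    beta_ols = gls_ar1(y, X, 0.0)
    beta_gls = gls_ar1(y, X, rho_true)
    print("OLS estimates:   ", beta_ols)
    print("AR(1) GLS estimates:", beta_gls)
    ```

    In practice ρ is estimated jointly with the fixed effects (as NLMIXED does via the likelihood); it is fixed at its true value here only to keep the sketch short.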

  14. 12 h shifts and rates of error among nurses: A systematic review.

    Science.gov (United States)

    Clendon, Jill; Gibbons, Veronique

    2015-07-01

    To determine the effect of working 12 h or more on a single shift in an acute care hospital setting, compared with working less than 12 h, on rates of error among nurses. Systematic review. A three-step search strategy was utilised. An initial search of Cochrane, the Joanna Briggs Institute (JBI), MEDLINE and CINAHL was undertaken. A second search using all identified keywords and index terms was then undertaken across all included databases (Embase, Current Contents, Proquest Nursing and Allied Health Source, Proquest Theses and Dissertations, Dissertation Abstracts International). Thirdly, reference lists of identified reports and articles were searched for additional studies. Studies published in English before August 2014 were included. Following review of title and abstract of 5429 publications, 26 studies were identified as meeting the inclusion criteria and selected for full retrieval and assessment for methodological quality. Of these, 13 were of sufficient quality to be included for review. Six studies reported higher rates of error for nurses working greater than 12 h on a single shift, four reported higher rates of error on shifts of up to 8 h, and three reported no difference. The six studies reporting significant rises in error rates among nurses working 12 h or more on a single shift comprised 89% of the total sample (N=60,780 of the total N=67,967). The risk of making an error appears higher among nurses working 12 h or longer on a single shift in acute care hospitals. Hospitals and units currently operating 12 h shift systems should review this scheduling practice due to the potential negative impact on patient outcomes. Further research is required to consider factors that may mitigate the risk of error where 12 h shifts are scheduled and this cannot be changed. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps for improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014) and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates of using smart pumps, the overriding of soft alerts, non-intercepted errors, or the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug

  16. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    International Nuclear Information System (INIS)

    DeSalvo, Riccardo

    2015-01-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similar to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in measurements of the universal gravitational constant G. • Collective motion of dislocations results in breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism, different from loss angle and viscous models, is necessary. • Mitigation measures proposed may bring coherence to the measurements of G

  17. An Adaptive Systematic Lossy Error Protection Scheme for Broadcast Applications Based on Frequency Filtering and Unequal Picture Protection

    Directory of Open Access Journals (Sweden)

    Marie Ramon

    2009-01-01

    Systematic lossy error protection (SLEP) is a robust error resilient mechanism based on principles of Wyner-Ziv (WZ) coding for video transmission over error-prone networks. In an SLEP scheme, the video bitstream is separated into two parts: a systematic part consisting of a video sequence transmitted without channel coding, and additional information consisting of a WZ supplementary stream. This paper presents an adaptive SLEP scheme in which the WZ stream is obtained by frequency filtering in the transform domain. Additionally, error resilience varies adaptively depending on the characteristics of the compressed video. We show that the proposed SLEP architecture achieves graceful degradation of reconstructed video quality in the presence of increasing transmission errors. Moreover, it provides good performance in terms of both error protection and reconstructed video quality compared to solutions based on coarser quantization, while offering an interesting embedded scheme for digital video format conversion.

  18. Residual sweeping errors in turbulent particle pair diffusion in a Lagrangian diffusion model.

    Directory of Open Access Journals (Sweden)

    Nadeem A Malik

    Thomson, D. J. & Devenish, B. J. [J. Fluid Mech. 526, 277 (2005)] and others have suggested that sweeping effects make Lagrangian properties in Kinematic Simulations (KS) [Fung J. C. H., Hunt J. C. R., Malik N. A. & Perkins R. J., J. Fluid Mech. 236, 281 (1992)] unreliable. However, such a conclusion can only be drawn under the assumption of locality. The major aim here is to quantify the sweeping errors in KS without assuming locality. Through a novel analysis based upon analysing pairs of particle trajectories in a frame of reference moving with the large energy-containing scales of motion, it is shown that the normalized integrated error [Formula: see text] in the turbulent pair diffusivity (K) due to the sweeping effect decreases with increasing pair separation (σl), such that [Formula: see text] as σl/η → ∞; and [Formula: see text] as σl/η → 0. η is the Kolmogorov turbulence microscale. There is an intermediate range of separations 1 < σl/η < ∞ in which the error [Formula: see text] remains negligible. Simulations using KS show that in the swept frame of reference this intermediate range is large, covering almost the entire inertial subrange simulated, 1 < σl/η < 10⁵, implying that the deviation from locality observed in KS cannot be attributed to sweeping errors. This is important for pair diffusion theory and modeling. PACS numbers: 47.27.E?, 47.27.Gs, 47.27.jv, 47.27.Ak, 47.27.tb, 47.27.eb, 47.11.-j.
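    A kinematic-simulation velocity field of the kind referenced (Fung et al.) is a sum of random Fourier modes made divergence-free by construction, with amplitudes shaped to a Kolmogorov-like spectrum. A minimal 2D sketch (mode count, wavenumber range and spectrum slope chosen for illustration, not taken from the paper) verifies incompressibility numerically:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Random Fourier modes with coefficients perpendicular to each
    # wavevector, so the synthetic field is divergence-free by construction
    n_modes = 32
    k_mag = 2 * np.pi * 2.0 ** np.linspace(0, 5, n_modes)  # mode wavenumbers
    theta = rng.uniform(0, 2 * np.pi, n_modes)             # mode directions
    kx, ky = k_mag * np.cos(theta), k_mag * np.sin(theta)
    amp = k_mag ** (-5.0 / 6.0)    # E(k) ~ k^-5/3  =>  |u_k| ~ k^-5/6
    phase = rng.uniform(0, 2 * np.pi, n_modes)

    def velocity(x, y):
        c = np.cos(kx * x + ky * y + phase)
        # Unit vectors (-ky, kx)/|k| are perpendicular to k: k . u_k = 0
        u = np.sum(amp * (-ky / k_mag) * c)
        v = np.sum(amp * (kx / k_mag) * c)
        return u, v

    # Numerical check of incompressibility: du/dx + dv/dy ~ 0
    h = 1e-6
    u0, v0 = velocity(0.3, 0.7)
    ux = (velocity(0.3 + h, 0.7)[0] - u0) / h
    vy = (velocity(0.3, 0.7 + h)[1] - v0) / h
    print(f"divergence at test point: {ux + vy:.2e}")
    ```

    Full KS adds time dependence through mode frequencies ω_n (often ∝ √(k³E(k))); the spatial construction above is the part relevant to sweeping, since the large energy-containing modes advect the small ones.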

  19. Prevalence and spectrum of residual symptoms in Lyme neuroborreliosis after pharmacological treatment: a systematic review.

    Science.gov (United States)

    Dersch, R; Sommer, H; Rauer, S; Meerpohl, J J

    2016-01-01

    Controversy exists about residual symptoms after pharmacological treatment of Lyme neuroborreliosis (LNB). Reports of disabling long-term sequelae lead to concerns in patients and health care providers. We systematically reviewed the available evidence from studies reporting treatment of Lyme neuroborreliosis to assess the prevalence and spectrum of residual symptoms after treatment. A literature search was performed in three databases and three clinical trial registers to find eligible studies reporting on residual symptoms in patients after pharmacological treatment of LNB. Diagnosis must have been performed according to consensus-derived case definitions. No restrictions regarding study design or language were set. Symptom prevalence was pooled using a random-effects model. Forty-four eligible clinical trials and studies were found: 8 RCTs, 17 cohort studies, 2 case-control studies, and 17 case series. The follow-up period in the eligible studies ranged from 7 days to 20 years. The weighted mean proportion of residual symptoms was 28 % (95 % CI 23-34 %, n = 34 studies) for the latest reported time point. Prevalence of residual symptoms was statistically significantly higher in studies using the "possible" case definition (p = 0.0048). Cranial neuropathy, pain, paresis, cognitive disturbances, headache, and fatigue were statistically significantly lower in studies using the "probable/definite" case definition. LNB patients may experience residual symptoms after treatment with a prevalence of approximately 28 %. The prevalence and spectrum of residual symptoms differ according to the applied case definition. Symptoms like fatigue are not reported in studies using the "probable/definite" case definition. As the "possible" case definition is more unspecific, patients with other conditions may be included. Reports of debilitating fatigue and cognitive impairment after LNB, a "post-Lyme syndrome", could therefore be an artifact of unspecific case definitions in single

  20. Carers' Medication Administration Errors in the Domiciliary Setting: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Anam Parand

    Medications are mostly taken in patients' own homes, increasingly administered by carers, yet studies of medication safety have been largely conducted in the hospital setting. We aimed to review studies of how carers cause and/or prevent medication administration errors (MAEs) within the patient's home; to identify types, prevalence and causes of these MAEs and any interventions to prevent them. A narrative systematic review of literature published between 1 Jan 1946 and 23 Sep 2013 was carried out across the databases EMBASE, MEDLINE, PSYCHINFO, COCHRANE and CINAHL. Empirical studies were included where carers were responsible for preventing/causing MAEs in the home, and standardised tools were used for data extraction and quality assessment. Thirty-six papers met the criteria for narrative review, 33 of which included parents caring for children, two predominantly comprised adult children and spouses caring for older parents/partners, and one focused on paid carers mostly looking after older adults. The carer administration error rate ranged from 1.9 to 33% of medications administered and from 12 to 92.7% of carers administering medication. These included dosage errors, omitted administration, wrong medication and wrong time or route of administration. Contributory factors included individual carer factors (e.g. carer age), environmental factors (e.g. storage), medication factors (e.g. number of medicines), prescription communication factors (e.g. comprehensibility of instructions), psychosocial factors (e.g. carer-to-carer communication), and care-recipient factors (e.g. recipient age). The few interventions effective in preventing MAEs involved carer training and tailored equipment. This review shows that home medication administration errors made by carers are a potentially serious patient safety issue. Carers made similar errors to those made by professionals in other contexts and a wide variety of contributory factors were identified. The home care

  1. Carers' Medication Administration Errors in the Domiciliary Setting: A Systematic Review.

    Science.gov (United States)

    Parand, Anam; Garfield, Sara; Vincent, Charles; Franklin, Bryony Dean

    2016-01-01

    Medications are mostly taken in patients' own homes, increasingly administered by carers, yet studies of medication safety have been largely conducted in the hospital setting. We aimed to review studies of how carers cause and/or prevent medication administration errors (MAEs) within the patient's home; to identify types, prevalence and causes of these MAEs and any interventions to prevent them. A narrative systematic review of literature published between 1 Jan 1946 and 23 Sep 2013 was carried out across the databases EMBASE, MEDLINE, PSYCHINFO, COCHRANE and CINAHL. Empirical studies were included where carers were responsible for preventing/causing MAEs in the home and standardised tools used for data extraction and quality assessment. Thirty-six papers met the criteria for narrative review, 33 of which included parents caring for children, two predominantly comprised adult children and spouses caring for older parents/partners, and one focused on paid carers mostly looking after older adults. The carer administration error rate ranged from 1.9 to 33% of medications administered and from 12 to 92.7% of carers administering medication. These included dosage errors, omitted administration, wrong medication and wrong time or route of administration. Contributory factors included individual carer factors (e.g. carer age), environmental factors (e.g. storage), medication factors (e.g. number of medicines), prescription communication factors (e.g. comprehensibility of instructions), psychosocial factors (e.g. carer-to-carer communication), and care-recipient factors (e.g. recipient age). The few interventions effective in preventing MAEs involved carer training and tailored equipment. This review shows that home medication administration errors made by carers are a potentially serious patient safety issue. Carers made similar errors to those made by professionals in other contexts and a wide variety of contributory factors were identified. 
The home care setting should

  2. Factors contributing to medication errors made when using computerized order entry in pediatrics: a systematic review.

    Science.gov (United States)

    Tolley, Clare L; Forde, Niamh E; Coffey, Katherine L; Sittig, Dean F; Ash, Joan S; Husband, Andrew K; Bates, David W; Slight, Sarah P

    2017-10-26

    To identify and understand the factors that contribute to medication errors associated with the use of computerized provider order entry (CPOE) in pediatrics and provide recommendations on how CPOE systems could be improved. We conducted a systematic literature review across 3 large databases: the Cumulative Index to Nursing and Allied Health Literature, Embase, and Medline. Three independent reviewers screened the titles, and 2 authors then independently reviewed all abstracts and full texts, with 1 author acting as a constant across all publications. Data were extracted onto a customized data extraction sheet, and a narrative synthesis of all eligible studies was undertaken. A total of 47 articles were included in this review. We identified 5 factors that contributed to errors with the use of a CPOE system: (1) lack of drug dosing alerts, which failed to detect calculation errors; (2) generation of inappropriate dosing alerts, such as warnings based on incorrect drug indications; (3) inappropriate drug duplication alerts, as a result of the system failing to consider factors such as the route of administration; (4) dropdown menu selection errors; and (5) system design issues, such as a lack of suitable dosing options for a particular drug. This review highlights 5 key factors that contributed to the occurrence of CPOE-related medication errors in pediatrics. Dosing support is the most important. More advanced clinical decision support that can suggest doses based on the drug indication is needed. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  3. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    Science.gov (United States)

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  4. Systematic errors due to linear congruential random-number generators with the Swendsen-Wang algorithm: a warning.

    Science.gov (United States)

    Ossola, Giovanni; Sokal, Alan D

    2004-08-01

    We show that linear congruential pseudo-random-number generators can cause systematic errors in Monte Carlo simulations using the Swendsen-Wang algorithm, if the lattice size is a multiple of a very large power of 2 and one random number is used per bond. These systematic errors arise from correlations within a single bond-update half-sweep. The errors can be eliminated (or at least radically reduced) by updating the bonds in a random order or in an aperiodic manner. It also helps to use a generator of large modulus (e.g., 60 or more bits).
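    The mechanism is easy to demonstrate with a toy check (a sketch, not the authors' code): for any linear congruential generator with a power-of-2 modulus, the k lowest-order bits themselves form an LCG modulo 2^k and therefore repeat with period at most 2^k, so a lattice whose size is a large power of 2 can sample these short-period bit patterns in lockstep. The constants below are Knuth's MMIX parameters, used here only as a familiar example.

```python
def lcg(seed, a=6364136223846793005, c=1442695040888963407, m=2**64):
    """Linear congruential generator x_{n+1} = (a*x_n + c) mod m (MMIX constants)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period_of_low_bits(samples, k):
    """Smallest p such that the k lowest-order bits of the samples repeat with period p."""
    bits = [x & ((1 << k) - 1) for x in samples]
    for p in range(1, len(bits)):
        if all(bits[i] == bits[i - p] for i in range(p, len(bits))):
            return p
    return None

gen = lcg(seed=12345)
samples = [next(gen) for _ in range(64)]

# The k lowest-order bits cycle with period exactly 2^k: [2, 4, 8, 16]
periods = [period_of_low_bits(samples, k) for k in (1, 2, 3, 4)]
```

    Updating the bonds in a random or aperiodic order, as the paper recommends, breaks the fixed alignment between lattice stride and bit period, which is why it suppresses the systematic error.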

  5. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    Science.gov (United States)

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-08-06

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. The standard deviation of the measured distance is the usual accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m): it is generally known that this accuracy cannot be increased simply by repeating the measurement, because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) were tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the distance measurement by at least 50%.
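    The abstract does not reproduce the per-instrument correction functions, so the sketch below only illustrates the general idea under assumed data: calibration residuals (measured minus reference distance) are fitted with an additive constant plus a scale term by ordinary least squares, and the fitted systematic part is subtracted from subsequent measurements.

```python
def fit_linear_correction(reference, measured):
    """Least-squares fit of residual = a + b*distance (pure Python, no libraries)."""
    n = len(reference)
    resid = [m - r for r, m in zip(reference, measured)]
    sx = sum(reference)
    sy = sum(resid)
    sxx = sum(x * x for x in reference)
    sxy = sum(x * y for x, y in zip(reference, resid))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def apply_correction(measured_distance, a, b):
    """Remove the fitted systematic part from a new measurement (metres)."""
    return measured_distance - (a + b * measured_distance)

# Synthetic calibration baseline: a 1 mm additive offset plus a 100 ppm scale error.
reference = [5.0, 10.0, 20.0, 50.0]
measured = [r + 0.001 + 0.0001 * r for r in reference]
a, b = fit_linear_correction(reference, measured)
```

    A real calibration, as in the paper, would use many more baselines and an instrument-specific (possibly cyclic) correction model rather than this two-parameter form.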

  6. Design of a real-time spectroscopic rotating compensator ellipsometer without systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Broch, Laurent, E-mail: laurent.broch@univ-lorraine.fr [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Stein, Nicolas [Institut Jean Lamour, Universite de Lorraine, UMR 7198 CNRS, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Zimmer, Alexandre [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS, Universite de Bourgogne, 9 avenue Alain Savary BP 47870, F-21078 Dijon Cedex (France); Battie, Yann; Naciri, Aotmane En [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France)

    2014-11-28

    We describe a spectroscopic ellipsometer in the visible domain (400–800 nm) based on a rotating compensator technology using two detectors. The classical analyzer is replaced by a fixed Rochon birefringent beamsplitter which splits the incident light wave into two perpendicularly polarized waves, one oriented at +45° and the other at −45° with respect to the plane of incidence. Both emergent optical signals are analyzed by two identical CCD detectors which are synchronized by an optical encoder fixed on the shaft of the step-by-step motor of the compensator. The final spectrum is the result of the two averaged Ψ and Δ spectra acquired by both detectors. We show that Ψ and Δ spectra are acquired without systematic errors on a spectral range fixed from 400 to 800 nm. The acquisition time can be adjusted down to 25 ms. The setup was validated by monitoring the first steps of bismuth telluride film electrocrystallization. The results show that experimental growth parameters, such as film thickness and the volume fraction of deposited material, can be extracted with better trueness. - Highlights: • High-speed rotating compensator ellipsometer equipped with 2 detectors. • Ellipsometric angles acquired without systematic errors. • In-situ monitoring of electrocrystallization of a bismuth telluride thin layer. • High accuracy of fitted physical parameters.

  7. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    Science.gov (United States)

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is approximately 100× more accurate than uniform refinement for the same amount of computational effort for a 67-group deep penetration shielding problem.

  8. Correction Model of BeiDou Code Systematic Multipath Errors and Its Impacts on Single-frequency PPP

    Directory of Open Access Journals (Sweden)

    WANG Jie

    2017-07-01

    Full Text Available There are systematic multipath errors in BeiDou code measurements, ranging from several decimeters to more than 1 meter. They can be divided into two categories: systematic variations in IGSO/MEO code measurements and in GEO code measurements. In this contribution, a methodology for correcting BeiDou GEO code multipath is proposed based on a Kalman filter algorithm. The standard deviation of the GEO multipath (MP) series decreases by about 10-16% after correction. Code measurements carry considerable weight in single-frequency PPP, so these systematic multipath errors affect single-frequency PPP; our analysis indicates that they cause a bias of about 1 m. We then evaluated the improvement in single-frequency PPP accuracy after code multipath correction. The systematic errors of GEO code measurements are corrected by applying our proposed Kalman filter method, and those of IGSO and MEO code measurements by applying the elevation-dependent model proposed by Wanninger and Beer. Ten days of observations from four MGEX (Multi-GNSS Experiment) stations are processed. The results indicate that single-frequency PPP accuracy can be improved remarkably by applying code multipath correction: accuracy in the up direction improves by 65% after IGSO and MEO code multipath correction, and by a further 15% after GEO code multipath correction.
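    As a hedged sketch of what an elevation-dependent code correction looks like (the node values below are invented placeholders, not Wanninger and Beer's published coefficients), corrections defined at elevation nodes can be linearly interpolated and subtracted from the code observable:

```python
ELEV_NODES = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]  # degrees
# Illustrative per-node corrections in metres (placeholder values only):
IGSO_CORR = [-0.55, -0.40, -0.34, -0.23, -0.15, -0.04, 0.09, 0.19, 0.27, 0.35]

def code_multipath_correction(elev_deg, nodes=ELEV_NODES, corr=IGSO_CORR):
    """Linearly interpolate the code-bias correction (metres) at a given elevation."""
    if elev_deg <= nodes[0]:
        return corr[0]
    if elev_deg >= nodes[-1]:
        return corr[-1]
    for i in range(1, len(nodes)):
        if elev_deg <= nodes[i]:
            t = (elev_deg - nodes[i - 1]) / (nodes[i] - nodes[i - 1])
            return corr[i - 1] + t * (corr[i] - corr[i - 1])

# Apply to a raw code pseudorange (metres) observed at 35 degrees elevation:
corrected_pseudorange = 21345678.123 - code_multipath_correction(35.0)
```

    The GEO satellites, being essentially stationary in elevation, need the time-domain (Kalman filter) approach described in the abstract instead of an elevation-dependent table.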

  9. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    Science.gov (United States)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by the combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be accounted for in the retrieval algorithms to create a set of data closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  10. The reliability and measurement error of protractor-based goniometry of the fingers: A systematic review.

    Science.gov (United States)

    van Kooij, Yara E; Fink, Alexandra; Nijhuis-van der Sanden, Maria W; Speksnijder, Caroline M

    Study design: Systematic review. Purpose of the study: To review the available literature for evidence on the reliability and measurement error of protractor-based goniometry assessment of the finger joints. Databases were searched for articles with key words "hand," "goniometry," "reliability," and derivatives of these terms. Assessment of the methodological quality was carried out using the Consensus-Based Standards for the Selection of Health Measurement Instruments checklist. Two independent reviewers performed a best evidence synthesis based on criteria proposed by Terwee et al (2007). Fifteen articles were included. One article was of fair methodological quality, and 14 articles were of poor methodological quality. An acceptable level for reliability (intraclass correlation coefficient > 0.70 or Pearson's correlation > 0.80) was reported in 1 study of fair methodological quality and in 8 articles of low methodological quality. Because the minimal important change was not calculated in the articles, there was an unknown level of evidence for the measurement error. Further research with adequate sample sizes should focus on reference outcomes for different patient groups. For valid therapy evaluation, it is important to know whether a change in range of motion reflects a real change in the patient or is due to the measurement error of the goniometer. Until now, there is insufficient evidence to establish this cut-off point (the smallest detectable change). Following the Consensus-Based Standards for the Selection of Health Measurement Instruments criteria, there was a limited level of evidence for acceptable reliability in the dorsal measurement method and an unknown level of evidence for the measurement error. Level of evidence: 2a. Copyright © 2017 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.

  11. Reducing Systematic Errors in Oxide Species with Density Functional Theory Calculations

    DEFF Research Database (Denmark)

    Christensen, Rune; Hummelshøj, Jens S.; Hansen, Heine Anton

    2015-01-01

    Density functional theory calculations can be used to gain valuable insight into the fundamental reaction processes in metal−oxygen systems, e.g., metal−oxygen batteries. Here, the ability of a range of different exchange-correlation functionals to reproduce experimental enthalpies of formation for different types of alkali and alkaline earth metal oxide species has been examined. Most examined functionals result in significant overestimation of the stability of superoxide species compared to peroxides and monoxides, which can result in erroneous prediction of reaction pathways. We show that if metal chlorides are used as reference structures instead of metals, the systematic errors are significantly reduced and functional variations decreased. Using a metal chloride reference, where the metal atoms are in the same oxidation state as in the oxide species, will provide a computationally inexpensive

  12. RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION

    Energy Technology Data Exchange (ETDEWEB)

    GARDNER,C.J.; LEE,Y.Y.; WENG,W.T.

    1998-06-22

    The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.

  13. Variations in Learning Rate: Student Classification Based on Systematic Residual Error Patterns across Practice Opportunities

    Science.gov (United States)

    Liu, Ran; Koedinger, Kenneth R.

    2015-01-01

    A growing body of research suggests that accounting for student specific variability in educational data can improve modeling accuracy and may have implications for individualizing instruction. The Additive Factors Model (AFM), a logistic regression model used to fit educational data and discover/refine skill models of learning, contains a…
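    The abstract is cut off, but the Additive Factors Model it refers to has a well-known form; in the sketch below (parameter values invented for illustration), the log-odds of a correct response combine student proficiency with, per skill, an easiness term and a learning-rate term scaled by prior practice opportunities:

```python
import math

def afm_probability(theta, skills, beta, gamma, opportunities):
    """P(correct) under the Additive Factors Model.

    theta            : student proficiency
    beta[k]          : easiness of skill k
    gamma[k]         : learning rate of skill k
    opportunities[k] : prior practice opportunities the student had on skill k
    """
    logit = theta + sum(beta[k] + gamma[k] * opportunities[k] for k in skills)
    return 1.0 / (1.0 + math.exp(-logit))

# A student's predicted success grows with practice (all values hypothetical):
p_first = afm_probability(0.2, ["fractions"], {"fractions": -0.5},
                          {"fractions": 0.15}, {"fractions": 0})
p_fifth = afm_probability(0.2, ["fractions"], {"fractions": -0.5},
                          {"fractions": 0.15}, {"fractions": 4})
```

    Systematic residual error patterns arise when a student's actual learning curve deviates consistently from the single shared learning rate gamma[k], which is the variability the paper's classification targets.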

  14. Human-simulation-based learning to prevent medication error: A systematic review.

    Science.gov (United States)

    Sarfati, Laura; Ranchon, Florence; Vantard, Nicolas; Schwiertz, Vérane; Larbre, Virginie; Parat, Stéphanie; Faudel, Amélie; Rioufol, Catherine

    2018-01-31

    In the past 2 decades, there has been an increasing interest in simulation-based learning programs to prevent medication error (ME). To improve knowledge, skills, and attitudes in prescribers, nurses, and pharmaceutical staff, these methods enable training without directly involving patients. However, best practices for simulation for healthcare providers are as yet undefined. By analysing the current state of experience in the field, the present review aims to assess whether human simulation in healthcare helps to reduce ME. A systematic review was conducted on Medline from 2000 to June 2015, associating the terms "Patient Simulation," "Medication Errors," and "Simulation Healthcare." Reports of technology-based simulation were excluded, to focus exclusively on human simulation in nontechnical skills learning. Twenty-one studies assessing simulation-based learning programs were selected, focusing on pharmacy, medicine or nursing students, or concerning programs aimed at reducing administration or preparation errors, managing crises, or learning communication skills for healthcare professionals. The studies varied in design, methodology, and assessment criteria. Few demonstrated that simulation was more effective than didactic learning in reducing ME. This review highlights a lack of long-term assessment and real-life extrapolation, with limited scenarios and participant samples. These various experiences, however, help in identifying the key elements required for an effective human simulation-based learning program for ME prevention: i.e., scenario design, debriefing, and perception assessment. The performance of these programs depends on their ability to reflect reality and on professional guidance. Properly regulated simulation is a good way to train staff in events that happen only exceptionally, as well as in standard daily activities. By integrating human factors, simulation seems to be effective in preventing iatrogenic risk related to ME, if the program is

  15. The effectiveness of interventions designed to reduce medication administration errors: a synthesis of findings from systematic reviews.

    Science.gov (United States)

    Lapkin, Samuel; Levett-Jones, Tracy; Chenoweth, Lynn; Johnson, Maree

    2016-10-01

    The aim of this overview was to examine the effectiveness of interventions designed to improve patient safety by reducing medication administration errors, using data from systematic reviews. Medication administration errors remain unacceptably high despite the introduction of a range of interventions aimed at enhancing patient safety. Systematic reviews of strategies designed to improve medication safety report contradictory findings. A critical appraisal and synthesis of these findings are, therefore, warranted. A comprehensive three-step search strategy was employed to search across 10 electronic databases. Two reviewers independently examined the methodological rigour and scientific quality of included systematic reviews using the Assessment of Multiple Systematic Reviews protocol. Sixteen systematic reviews were eligible for inclusion. Evidence suggests that multifaceted approaches involving a combination of education and risk management strategies, together with the use of bar code technology, are effective in reducing medication errors. More research is needed to determine the benefits of routine double-checking of medications during administration by nurses, outcomes of self-administration of medications by capable patients, and associations between interruptions and medication errors. Medication-related incidents must be captured in a way that facilitates meaningful categorisation, including contributing factors, potential and actual risk of harm, and contextual information on the incident. © 2016 John Wiley & Sons Ltd.

  16. Using residual stacking to mitigate site-specific errors in order to improve the quality of GNSS-based coordinate time series of CORS

    Science.gov (United States)

    Knöpfler, Andreas; Mayer, Michael; Heck, Bernhard

    2014-05-01

    Within the last decades, positioning using GNSS (Global Navigation Satellite Systems; e.g., GPS) has become a standard tool in many (geo-)sciences. The positioning methods Precise Point Positioning and differential point positioning based on carrier phase observations have been developed for a broad variety of applications with different demands, for example on accuracy. In high-precision applications, a lot of effort has been invested in mitigating different error sources: the products for satellite orbits and satellite clocks were improved; the deviation of satellite and receiver antennas from an ideal antenna is modelled by calibration values on an absolute level; and the modelling of the ionosphere and the troposphere is updated year by year. Therefore, within the processing of data from CORS (continuously operating reference sites) equipped with geodetic hardware using a sophisticated strategy, the latest products and models nowadays enable positioning accuracies at the low-mm level. Despite the considerable improvements that have been achieved within GNSS data processing, a generally valid multipath model is still lacking. Therefore, site-specific multipath still represents a major error source in precise GNSS positioning. Furthermore, the calibration information of receiving GNSS antennas, which is for instance derived by a robot or chamber calibration, is strictly speaking valid only for the location of the calibration. The calibrated antenna can show a slightly different behaviour at the CORS due to near-field multipath effects. One very promising strategy to mitigate multipath effects as well as imperfectly calibrated receiver antennas is to stack observation residuals of several days; the multipath-loaded observation residuals are then analysed, for example with respect to signal direction, to find and reduce systematic constituents. This presentation will give a short overview of existing stacking approaches. In addition, first results of the stacking approach
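    A minimal sketch of the direction-dependent stacking idea (bin size and data layout are assumptions, not the authors' implementation): post-fit residuals from several days are averaged in azimuth/elevation bins, and the resulting map is subtracted from later observations.

```python
from collections import defaultdict

def build_stacking_map(residuals, bin_deg=5):
    """Average residuals in azimuth/elevation bins.

    residuals: iterable of (azimuth_deg, elevation_deg, residual_m) tuples,
    pooled over several days of observations.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for az, el, res in residuals:
        key = (int(az // bin_deg), int(el // bin_deg))
        sums[key] += res
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

def correct(observation_residual, az, el, stack_map, bin_deg=5):
    """Subtract the stacked mean for the observation's direction bin, if present."""
    key = (int(az // bin_deg), int(el // bin_deg))
    return observation_residual - stack_map.get(key, 0.0)

# Synthetic residuals: a repeating multipath signature near az 12-13, el 33-34.
stack_map = build_stacking_map([(12.0, 33.0, 0.005),
                                (13.0, 34.0, 0.007),
                                (200.0, 60.0, -0.004)])
```

    Because multipath repeats with the satellite geometry (for GPS, roughly a sidereal day), averaging over days reinforces the systematic constituent while random noise averages out.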

  17. The Residual Setup Errors of Different IGRT Alignment Procedures for Head and Neck IMRT and the Resulting Dosimetric Impact

    International Nuclear Information System (INIS)

    Graff, Pierre; Kirby, Neil; Weinberg, Vivian; Chen, Josephine; Yom, Sue S.; Lambert, Louise; Pouliot, Jean

    2013-01-01

    Purpose: To assess residual setup errors during head and neck radiation therapy and the resulting consequences for the delivered dose for various patient alignment procedures. Methods and Materials: Megavoltage cone beam computed tomography (MVCBCT) scans from 11 head and neck patients who underwent intensity modulated radiation therapy were used to assess setup errors. Each MVCBCT scan was registered to its reference planning kVCT, with seven different alignment procedures: automatic alignment and manual registration to 6 separate bony landmarks (sphenoid, left/right maxillary sinuses, mandible, cervical 1 [C1]-C2, and C7-thoracic 1 [T1] vertebrae). Shifts in the different alignments were compared with each other to determine whether there were any statistically significant differences. Then, the dose distribution was recalculated on 3 MVCBCT images per patient for every alignment procedure. The resulting dose-volume histograms for targets and organs at risk (OARs) were compared to those from the planning kVCTs. Results: The registration procedures produced statistically significant global differences in patient alignment and actual dose distribution, calling for a need for standardization of patient positioning. Vertically, the automatic, sphenoid, and maxillary sinuses alignments mainly generated posterior shifts and resulted in mean increases in maximal dose to OARs of >3% of the planned dose. The suggested choice of C1-C2 as a reference landmark appears valid, combining both OAR sparing and target coverage. Assuming this choice, relevant margins to apply around volumes of interest at the time of planning to take into account for the relative mobility of other regions are discussed. Conclusions: Use of different alignment procedures for treating head and neck patients produced variations in patient setup and dose distribution. With concern for standardizing practice, C1-C2 reference alignment with relevant margins around planning volumes seems to be a valid

  18. Adenine Enrichment at the Fourth CDS Residue in Bacterial Genes Is Consistent with Error Proofing for +1 Frameshifts.

    Science.gov (United States)

    Abrahams, Liam; Hurst, Laurence D

    2017-12-01

    Beyond selection for optimal protein functioning, coding sequences (CDSs) are under selection at the RNA and DNA levels. Here, we identify a possible signature of "dual-coding," namely extensive adenine (A) enrichment at bacterial CDS fourth sites. In 99.07% of studied bacterial genomes, fourth site A use is greater than expected given genomic A-starting codon use. Arguing for nucleotide level selection, A-starting serine and arginine second codons are heavily utilized when compared with their non-A starting synonyms. Several models have the ability to explain some of this trend. In part, A-enrichment likely reduces 5' mRNA stability, promoting translation initiation. However, T/U, which may also reduce stability, is avoided. Further, +1 frameshifts on the initiating ATG encode a stop codon (TGA) provided A is the fourth residue, acting either as a frameshift "catch and destroy" mechanism or a frameshift stop-and-adjust mechanism, and hence implicated in translation initiation. Consistent with both, genomes lacking TGA stop codons exhibit weaker fourth site A-enrichment. Sequences lacking a Shine-Dalgarno sequence and those without upstream leader genes, which may be more error prone during initiation, have greater utilization of A, again suggesting a role in initiation. The frameshift correction model is consistent with the notion that many genomic features are error-mitigation factors and provides the first evidence for site-specific out-of-frame stop codon selection. We conjecture that the NTG universal start codon may have evolved as a consequence of TGA being a stop codon and the ability of NTGA to rapidly terminate or adjust a ribosome. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
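    The +1 frameshift "catch and destroy" logic can be illustrated with a toy check (not the authors' analysis code): when adenine occupies the fourth site, a ribosome slipping +1 on the initiating ATG immediately reads the stop codon TGA.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def plus_one_frameshift_hits_stop(cds):
    """True if the +1 reading frame (starting at position 1) opens with a stop codon."""
    return cds[1:4] in STOP_CODONS

# ATG|A...  -> the +1 frame reads TGA, terminating the frameshifted ribosome:
assert plus_one_frameshift_hits_stop("ATGAAATTT")
# ATG|C...  -> the +1 frame reads TGC, and the error goes uncaught:
assert not plus_one_frameshift_hits_stop("ATGCAATTT")
```

    This also makes concrete why the effect requires TGA to be a stop codon: in genomes where TGA is reassigned, the fourth-site A no longer creates an out-of-frame stop, matching the weaker enrichment the authors observe.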

  19. The Residual Setup Errors of Different IGRT Alignment Procedures for Head and Neck IMRT and the Resulting Dosimetric Impact

    Energy Technology Data Exchange (ETDEWEB)

    Graff, Pierre [Department of Radiation-Oncology, Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco, California (United States); Radiation-Oncology, Alexis Vautrin Cancer Center, Vandoeuvre-Les-Nancy (France); Doctoral School BioSE (EA4360), Nancy (France); Kirby, Neil [Department of Radiation-Oncology, Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco, California (United States); Weinberg, Vivian [Department of Radiation-Oncology, Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco, California (United States); Department of Biostatistics, Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco, California (United States); Chen, Josephine; Yom, Sue S. [Department of Radiation-Oncology, Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco, California (United States); Lambert, Louise [Department of Radiation-Oncology, Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco, California (United States); Radiation-Oncology, Montreal University Centre, Montreal (Canada); Pouliot, Jean, E-mail: jpouliot@radonc.ucsf.edu [Department of Radiation-Oncology, Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco, California (United States)

    2013-05-01

    Purpose: To assess residual setup errors during head and neck radiation therapy and the resulting consequences for the delivered dose for various patient alignment procedures. Methods and Materials: Megavoltage cone beam computed tomography (MVCBCT) scans from 11 head and neck patients who underwent intensity modulated radiation therapy were used to assess setup errors. Each MVCBCT scan was registered to its reference planning kVCT, with seven different alignment procedures: automatic alignment and manual registration to 6 separate bony landmarks (sphenoid, left/right maxillary sinuses, mandible, cervical 1 [C1]-C2, and C7-thoracic 1 [T1] vertebrae). Shifts in the different alignments were compared with each other to determine whether there were any statistically significant differences. Then, the dose distribution was recalculated on 3 MVCBCT images per patient for every alignment procedure. The resulting dose-volume histograms for targets and organs at risk (OARs) were compared to those from the planning kVCTs. Results: The registration procedures produced statistically significant global differences in patient alignment and actual dose distribution, calling for standardization of patient positioning. Vertically, the automatic, sphenoid, and maxillary sinuses alignments mainly generated posterior shifts and resulted in mean increases in maximal dose to OARs of >3% of the planned dose. The suggested choice of C1-C2 as a reference landmark appears valid, combining both OAR sparing and target coverage. Assuming this choice, relevant margins to apply around volumes of interest at the time of planning to account for the relative mobility of other regions are discussed. Conclusions: Use of different alignment procedures for treating head and neck patients produced variations in patient setup and dose distribution. With concern for standardizing practice, C1-C2 reference alignment with relevant margins around planning volumes seems to be a valid option.

  20. Bundle interventions used to reduce prescribing and administration errors in hospitalized children: a systematic review.

    Science.gov (United States)

    Bannan, D F; Tully, M P

    2016-06-01

    Bundle interventions are becoming increasingly used as patient safety interventions. The objective of this study was to describe and categorize which bundle interventions are used to reduce prescribing errors (PEs) and administration errors (AEs) in hospitalized children and to assess the quality of the published literature. Articles published in English and Arabic between 1985 and September 2015 were sought in MEDLINE, EMBASE and CINHAL. Bibliographies of included articles were screened for additional studies. We included any study with a comparator group reporting rates of PEs and AEs. Two authors independently extracted data, classified interventions in each bundle and assessed the studies for potential risk of bias. Constituent interventions of the bundles were categorized using both the Cochrane Effective Practice and Organization of Care Group (EPOC) taxonomy of intervention and the Behavioural Change Wheel (BCW). Seventeen studies met the inclusion criteria. All bundles contained interventions that were either professional, organizational or a mixture of both. According to the BCW, studies used interventions with functions delivering environmental restructuring (17/17), education (16/17), persuasion (4/17), training (3/17), restriction (3/17), incentivization (1/17), coercion (1/17), modelling (1/17) and enablement (1/17). Nine studies had bundles with two intervention functions, and eight studies had three or more intervention functions. All studies were low quality before/after studies. Selection bias varied between studies. Performance bias was either low or unclear. Attrition bias was unclear, and detection bias was rated high in most studies. Ten studies described the interventions fairly well, and seven studies did not adequately explain the interventions used. 
This novel analysis in a systematic review showed that bundle interventions delivering two or more intervention functions have been investigated, but that the study quality was too poor to assess their effectiveness.

  1. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    Energy Technology Data Exchange (ETDEWEB)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.; Bard, D.; Bertin, E.; Bosch, J.; Boutigny, D.; Courbin, F.; Dawson, W. A.; Donnarumma, A.; Fenech Conti, I.; Gavazzi, R.; Gentile, M.; Gill, M. S. S.; Hogg, D. W.; Huff, E. M.; Jee, M. J.; Kacprzak, T.; Kilbinger, M.; Kuntzer, T.; Lang, D.; Luo, W.; March, M. C.; Marshall, P. J.; Meyers, J. E.; Miller, L.; Miyatake, H.; Nakajima, R.; Ngole Mboula, F. M.; Nurbaeva, G.; Okura, Y.; Paulin-Henriksson, S.; Rhodes, J.; Schneider, M. D.; Shan, H.; Sheldon, E. S.; Simet, M.; Starck, J. -L.; Sureau, F.; Tewes, M.; Zarb Adami, K.; Zhang, J.; Zuntz, J.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  2. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    International Nuclear Information System (INIS)

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-01-01

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
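The core correction implied by this record is normalizing volumetric gas readings for ambient temperature and pressure via the ideal gas law. A sketch (interface and defaults assumed, not from the paper):

```python
def normalize_gas_volume(v_measured_ml, t_ambient_c, p_ambient_kpa,
                         p_h2o_kpa=0.0, t_norm_c=0.0, p_norm_kpa=101.325):
    """Convert a volumetric gas reading to dry volume at normal conditions
    (0 degC, 101.325 kPa) using the ideal gas law: V ~ p/T."""
    t_k = t_ambient_c + 273.15
    t_norm_k = t_norm_c + 273.15
    return v_measured_ml * (p_ambient_kpa - p_h2o_kpa) / p_norm_kpa * t_norm_k / t_k

# 100 mL of dry gas read at 25 degC and sea-level pressure is ~91.6 mL at 0 degC.
print(round(normalize_gas_volume(100.0, 25.0, 101.325), 2))  # 91.61
```

Skipping this normalization at a high-altitude site, where ambient pressure is low, directly biases the reported methane potential, consistent with the altitude effect reported above.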

  3. Systematic instrumental errors between oxygen saturation analysers in fetal blood during deep hypoxemia.

    Science.gov (United States)

    Porath, M; Sinha, P; Dudenhausen, J W; Luttkus, A K

    2001-05-01

    During a study of artificially produced deep hypoxemia in fetal cord blood, systematic errors of three different oxygen saturation analysers were evaluated against a reference CO oximeter. The oxygen tensions (PO2) of 83 pre-heparinized fetal blood samples from umbilical veins were reduced by tonometry to 1.3 kPa (10 mm Hg) and 2.7 kPa (20 mm Hg). The oxygen saturation (SO2) was determined (n=1328) on a reference CO oximeter (ABL625, Radiometer Copenhagen) and on three tested instruments (two CO oximeters: Chiron865, Bayer Diagnostics; ABL700, Radiometer Copenhagen, and a portable blood gas analyser, i-STAT, Abbott). The CO oximeters measure the oxyhemoglobin and the reduced hemoglobin fractions by absorption spectrophotometry. The i-STAT system calculates the oxygen saturation from the measured pH, PO2, and PCO2. The measurements were performed in duplicate. Statistical evaluation focused on the differences between duplicate measurements and on systematic instrumental errors in oxygen saturation analysis compared to the reference CO oximeter. After tonometry, the median saturation dropped to 32.9% at a PO2=2.7 kPa (20 mm Hg), defined as saturation range 1, and to 10% SO2 at a PO2=1.3 kPa (10 mm Hg), defined as range 2. With decreasing SO2, all devices showed an increased difference between duplicate measurements. ABL625 and ABL700 showed the closest agreement between instruments (0.25% SO2 bias at saturation range 1 and -0.33% SO2 bias at saturation range 2). Chiron865 indicated higher saturation values than ABL 625 (3.07% SO2 bias at saturation range 1 and 2.28% SO2 bias at saturation range 2). Calculated saturation values (i-STAT) were more than 30% lower than the measured values of ABL625. The disagreement among CO oximeters was small but increasing under deep hypoxemia. Calculation found unacceptably low saturation.
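The i-STAT's exact algorithm is proprietary, but the kind of PO2-to-SO2 calculation it performs can be illustrated with the well-known Severinghaus approximation for the adult oxyhemoglobin dissociation curve (the fetal curve is left-shifted, one plausible reason calculated values diverge from measured ones at low saturation):

```python
def severinghaus_so2(po2_mmhg):
    """Severinghaus (1979) approximation of adult-blood O2 saturation
    (as a fraction) from oxygen tension in mmHg."""
    x = po2_mmhg ** 3 + 150.0 * po2_mmhg
    return 1.0 / (1.0 + 23400.0 / x)

# The dissociation curve is very steep at the tensions used in the study:
print(round(100 * severinghaus_so2(20), 1))  # 32.0 %SO2 at 2.7 kPa (20 mm Hg)
print(round(100 * severinghaus_so2(10), 1))  # 9.7 %SO2 at 1.3 kPa (10 mm Hg)
```

The steepness of the curve in this region explains why small errors in pH, PO2, or PCO2 propagate into large errors in a calculated saturation.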

  4. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
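The paper's test procedure builds on boosted GAMLSS; as a simpler stand-in, the permutation idea for detecting a device effect on random error (scale) can be sketched with a plain variance-ratio statistic (all names and numbers hypothetical):

```python
import numpy as np

def perm_test_scale(x, y, n_perm=2000, seed=None):
    """Permutation p-value for a device effect on spread (random error),
    using the absolute log variance ratio as the test statistic."""
    rng = np.random.default_rng(seed)
    stat = abs(np.log(np.var(x, ddof=1) / np.var(y, ddof=1)))
    pooled = np.concatenate([x, y])
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of measurements to devices
        s = abs(np.log(np.var(pooled[:n], ddof=1) / np.var(pooled[n:], ddof=1)))
        hits += s >= stat
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 60)  # device A: unit random error
b = rng.normal(0.0, 2.0, 60)  # device B: doubled random error
print(perm_test_scale(a, b, seed=1))  # small p-value: the spreads differ
```

A systematic bias would be tested the same way with a location statistic (e.g. difference in means); GAMLSS handles both location and scale jointly.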

  5. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  6. Joint position sense error in people with neck pain: A systematic review.

    Science.gov (United States)

    de Vries, J; Ischebeck, B K; Voogt, L P; van der Geest, J N; Janssen, M; Frens, M A; Kleinrensink, G J

    2015-12-01

    Several studies in recent decades have examined the relationship between proprioceptive deficits and neck pain. However, there is no uniform conclusion on the relationship between the two. Clinically, proprioception is evaluated using the Joint Position Sense Error (JPSE), which reflects a person's ability to accurately return his head to a predefined target after a cervical movement. We focused on differentiating the JPSE of people with neck pain from that of healthy controls. Systematic review according to the PRISMA guidelines. Our data sources were Embase, Medline OvidSP, Web of Science, Cochrane Central, CINAHL and Pubmed Publisher. To be included, studies had to compare JPSE of the neck (O) in people with neck pain (P) with JPSE of the neck in healthy controls (C). Fourteen studies were included. Four studies reported that participants with traumatic neck pain had a significantly higher JPSE than healthy controls. Of the eight studies involving people with non-traumatic neck pain, four reported significant differences between the groups. The JPSE did not vary between neck-pain groups. Current literature shows the JPSE to be a relevant measure when it is used correctly. All studies that calculated the JPSE over at least six trials showed a significantly increased JPSE in the neck pain group. This strongly suggests that 'number of repetitions' is a major element in correctly performing the JPSE test. Copyright © 2015 Elsevier Ltd. All rights reserved.
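The review's point about trial count can be made concrete: JPSE is commonly summarized as the mean absolute repositioning error over repeated trials, and the review found estimates reliable only with six or more trials. A sketch (numbers hypothetical):

```python
def jpse_deg(trial_errors_deg):
    """Mean absolute head-repositioning error (degrees) over repeated trials.
    The review suggests using at least six trials for a reliable estimate."""
    if len(trial_errors_deg) < 6:
        raise ValueError("use at least six repositioning trials")
    return sum(abs(e) for e in trial_errors_deg) / len(trial_errors_deg)

# Hypothetical repositioning errors (degrees) over eight trials:
print(jpse_deg([2.1, -3.4, 1.0, 4.2, -2.8, 0.5, 3.1, -1.9]))  # ~2.4 deg
```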

  7. The Cosine Error: A Bayesian Procedure for Treating a Non-repetitive Systematic Effect

    Directory of Open Access Journals (Sweden)

    Lira Ignacio

    2016-08-01

    An inconsistency with respect to variable transformations in our previous treatment of the cosine error example with repositioning (Metrologia, vol. 47, pp. R1–R14) is pointed out. The problem refers to the measurement of the vertical height of a column of liquid in a manometer. A systematic effect arises because of the possible deviation of the measurement axis from the vertical, which may be different each time the measurement is taken. A revised procedure for treating this problem is proposed; it consists in straightforward application of Bayesian statistics using a conditional reference prior with partial information. In most practical applications, the numerical differences between the two procedures will be negligible, so the interest of the revised one is mainly of conceptual nature. Nevertheless, similar measurement models may appear in other contexts, for example, in intercomparisons, so the present investigation may serve as a warning to analysts against applying the same methodology we used in our original approach to the present problem.

  8. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
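The RSC procedure can be sketched in one dimension: fit a low-order polynomial to the artificial displacements detected on the stationary reference sample, then subtract the model's prediction from the test sample's DVC-measured displacements (a 1-D toy under assumed names, not the authors' implementation):

```python
import numpy as np

def rsc_correct(test_disp, ref_coords, ref_disp, deg=2):
    """Fit a polynomial to the reference sample's artificial displacements
    (one axis), then subtract its prediction at the test sample's positions."""
    coeffs = np.polyfit(ref_coords, ref_disp, deg)
    test_coords = np.linspace(ref_coords.min(), ref_coords.max(), len(test_disp))
    return test_disp - np.polyval(coeffs, test_coords)

z = np.linspace(0.0, 10.0, 50)
artifact = 0.02 * z + 0.1            # translation + dilatation from self-heating
true_disp = np.full(50, 0.5)         # real deformation of the test sample
corrected = rsc_correct(true_disp + artifact, z, artifact)
print(np.allclose(corrected, true_disp))  # True: artifact removed
```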

  9. Systematic prediction error correction: a novel strategy for maintaining the predictive abilities of multivariate calibration models.

    Science.gov (United States)

    Chen, Zeng-Ping; Li, Li-Mei; Yu, Ru-Qin; Littlejohn, David; Nordon, Alison; Morris, Julian; Dann, Alison S; Jeffkins, Paul A; Richardson, Mark D; Stimpson, Sarah L

    2011-01-07

    The development of reliable multivariate calibration models for spectroscopic instruments in on-line/in-line monitoring of chemical and bio-chemical processes is generally difficult, time-consuming and costly. Therefore, it is preferable if calibration models can be used for an extended period, without the need to replace them. However, in many process applications, changes in the instrumental response (e.g. owing to a change of spectrometer) or variations in the measurement conditions (e.g. a change in temperature) can cause a multivariate calibration model to become invalid. In this contribution, a new method, systematic prediction error correction (SPEC), has been developed to maintain the predictive abilities of multivariate calibration models when e.g. the spectrometer or measurement conditions are altered. The performance of the method has been tested on two NIR data sets (one with changes in instrumental responses, the other with variations in experimental conditions) and the outcomes compared with those of some popular methods, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS). The results show that SPEC achieves satisfactory analyte predictions with significantly lower RMSEP values than global PLS and SBC for both data sets, even when only a few standardization samples are used. Furthermore, SPEC is simple to implement and requires less information than PDS, which offers advantages for applications with limited data.
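For context, the SBC baseline mentioned above corrects a drifting calibration with a single slope and bias fitted on a few standardization samples; SPEC generalizes this idea of modeling the systematic prediction error. A univariate SBC sketch (numbers hypothetical):

```python
import numpy as np

def slope_bias_correction(y_pred_std, y_ref_std):
    """Fit slope/bias on standardization samples' predictions; return a
    function that corrects new predictions from the drifted model."""
    slope, bias = np.polyfit(y_pred_std, y_ref_std, 1)
    return lambda y: slope * y + bias

# Old-model predictions drift on a new instrument: y_pred = 1.1*y_true + 0.3
y_true_std = np.array([1.0, 2.0, 3.0, 4.0])
y_pred_std = 1.1 * y_true_std + 0.3
correct = slope_bias_correction(y_pred_std, y_true_std)
print(round(correct(1.1 * 2.5 + 0.3), 6))  # 2.5 recovered after correction
```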

  10. Measuring nuclear-spin-dependent parity violation with molecules: Experimental methods and analysis of systematic errors

    Science.gov (United States)

    Altuntaş, Emine; Ammon, Jeffrey; Cahn, Sidney B.; DeMille, David

    2018-04-01

    Nuclear-spin-dependent parity violation (NSD-PV) effects in atoms and molecules arise from Z0 boson exchange between electrons and the nucleus and from the magnetic interaction between electrons and the parity-violating nuclear anapole moment. It has been proposed to study NSD-PV effects using an enhancement of the observable effect in diatomic molecules [D. DeMille et al., Phys. Rev. Lett. 100, 023003 (2008), 10.1103/PhysRevLett.100.023003]. Here we demonstrate highly sensitive measurements of this type, using the test system 138Ba19F. We show that systematic errors associated with our technique can be suppressed to at least the level of the present statistical sensitivity. With ˜170 h of data, we measure the matrix element W of the NSD-PV interaction with uncertainty δ W /(2 π )<0.7 Hz for each of two configurations where W must have different signs. This sensitivity would be sufficient to measure NSD-PV effects of the size anticipated across a wide range of nuclei.

  11. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    Science.gov (United States)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious and less attention is drawn to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
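The mechanism is easy to reproduce numerically: push zero-mean Gaussian HU noise through a calibration curve with a kink (angular point), and the mean RSP shifts even though the noise itself averages to zero (toy slopes and noise level assumed, not the paper's values):

```python
import numpy as np

def rsp_from_hu(hu):
    """Toy HU-to-RSP calibration curve with an angular point at HU = 0,
    where the slope changes (as in stoichiometric calibrations)."""
    return np.where(hu < 0, 1.0 + 1.0e-3 * hu, 1.0 + 0.6e-3 * hu)

rng = np.random.default_rng(0)
hu_true = 0.0                        # material sitting exactly at the kink
noise = rng.normal(0.0, 30.0, 200000)  # zero-mean stochastic CT noise (HU)
bias = rsp_from_hu(hu_true + noise).mean() - float(rsp_from_hu(hu_true))
print(bias)  # negative: the kink turns zero-mean noise into an RSP offset
```

With these toy slopes the offset is a few tenths of a percent RSP, the same order as the up-to-1% systematic errors reported above.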

  12. Analysis of a systematic search-based algorithm for determining protein backbone structure from a minimum number of residual dipolar couplings.

    Science.gov (United States)

    Wang, Lincong; Donald, Bruce Randall

    2004-01-01

    We have developed an ab initio algorithm for determining a protein backbone structure using global orientational restraints on internuclear vectors derived from residual dipolar couplings (RDCs) measured in one or two different aligning media by solution nuclear magnetic resonance (NMR) spectroscopy [14, 15]. Specifically, the conformation and global orientations of individual secondary structure elements are computed, independently, by an exact solution, systematic search-based minimization algorithm using only 2 RDCs per residue. The systematic search is built upon a quartic equation for computing, exactly and in constant time, the directions of an internuclear vector from RDCs, and linear or quadratic equations for computing the sines and cosines of backbone dihedral (phi, psi) angles from two vectors in consecutive peptide planes. In contrast to heuristic search such as simulated annealing (SA) or Monte-Carlo (MC) used by other NMR structure determination algorithms, our minimization algorithm can be analyzed rigorously in terms of expected algorithmic complexity and the coordinate precision of the protein structure as a function of error in the input data. The algorithm has been successfully applied to compute the backbone structures of three proteins using real NMR data.
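The geometric core of the approach is that one RDC restricts an internuclear vector to a cone about the alignment axis; for the special case of an axially symmetric tensor this is solvable in closed form (the paper's quartic handles the general rhombic tensor):

```python
import math

def polar_angle_from_rdc(d_obs, d_a):
    """For an axially symmetric alignment tensor, D = Da (3 cos^2(theta) - 1),
    so one RDC fixes the vector's polar angle up to a cone:
    cos(theta) = +/- sqrt((D/Da + 1) / 3). Returns theta in degrees."""
    c2 = (d_obs / d_a + 1.0) / 3.0
    if not 0.0 <= c2 <= 1.0:
        raise ValueError("RDC outside the allowed range for this tensor")
    return math.degrees(math.acos(math.sqrt(c2)))

print(polar_angle_from_rdc(2.0, 1.0))   # 0.0: vector along the alignment axis
print(polar_angle_from_rdc(-1.0, 1.0))  # 90.0: vector in the transverse plane
```

Two RDCs per residue, measured in different media, intersect two such cones and (with the peptide-plane equations) pin down the backbone dihedrals exactly, which is what makes the systematic search tractable.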

  13. How are medication errors defined? A systematic literature review of definitions and characteristics

    DEFF Research Database (Denmark)

    Lisby, Marianne; Nielsen, L P; Brock, Birgitte

    2010-01-01

    Multiplicity in terminology has been suggested as a possible explanation for the variation in the prevalence of medication errors. So far, few empirical studies have challenged this assertion. The objective of this review was, therefore, to describe the extent and characteristics of medication error definitions in hospitals and to consider the consequences for measuring the prevalence of medication errors.

  14. Strategies to reduce the systematic error due to tumor and rectum motion in radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Hoogeman, Mischa S.; Herk, Marcel van; Bois, Josien de; Lebesque, Joos V.

    2005-01-01

    Background and purpose: The goal of this work is to develop and evaluate strategies to reduce the uncertainty in the prostate position and rectum shape that arises in the preparation stage of the radiation treatment of prostate cancer. Patients and methods: Nineteen prostate cancer patients, who were treated with 3-dimensional conformal radiotherapy, received each a planning CT scan and 8-13 repeat CT scans during the treatment period. We quantified prostate motion relative to the pelvic bone by first matching the repeat CT scans on the planning CT scan using the bony anatomy. Subsequently, each contoured prostate, including seminal vesicles, was matched on the prostate in the planning CT scan to obtain the translations and rotations. The variation in prostate position was determined in terms of the systematic, random and group mean error. We tested the performance of two correction strategies to reduce the systematic error due to prostate motion. The first strategy, the pre-treatment strategy, used only the initial rectum volume in the planning CT scan to adjust the angle of the prostate with respect to the left-right (LR) axis and the shape and position of the rectum. The second strategy, the adaptive strategy, used the data of repeat CT scans to improve the estimate of the prostate position and rectum shape during the treatment. Results: The largest component of prostate motion was a rotation around the LR axis. The systematic error (1 SD) was 5.1 deg and the random error was 3.6 deg (1 SD). The average LR-axis rotation between the planning and the repeat CT scans correlated significantly with the rectum volume in the planning CT scan (r=0.86, P<0.0001). Correction of the rotational position on the basis of the planning rectum volume alone reduced the systematic error by 28%. A correction, based on the data of the planning CT scan and 4 repeat CT scans reduced the systematic error over the complete treatment period by a factor of 2. When the correction was
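The systematic/random decomposition used above follows the standard population analysis of setup errors: the systematic error is the SD of per-patient mean deviations, the random error the RMS of per-patient SDs. A sketch with hypothetical rotation data:

```python
import numpy as np

def setup_error_components(per_patient_measurements):
    """Population decomposition of setup errors: group mean M, systematic
    error Sigma (SD of per-patient means), random error sigma (RMS of
    per-patient SDs)."""
    means = np.array([np.mean(p) for p in per_patient_measurements])
    sds = np.array([np.std(p, ddof=1) for p in per_patient_measurements])
    return means.mean(), means.std(ddof=1), float(np.sqrt(np.mean(sds ** 2)))

# Hypothetical LR-axis rotations (degrees) for three patients over repeat scans:
data = [[4.0, 6.0, 5.0], [-2.0, 0.0, 1.0], [9.0, 11.0, 10.0]]
m, sigma_sys, sigma_rand = setup_error_components(data)
print(round(m, 2), round(sigma_sys, 2), round(sigma_rand, 2))
```

A correction strategy like the adaptive one above shrinks sigma_sys (the per-patient offsets) while leaving sigma_rand, the scan-to-scan scatter, essentially unchanged.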

  15. Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors

    Directory of Open Access Journals (Sweden)

    G. Yang

    2003-06-01

    Examples of current research into systematic errors in climate models are used to demonstrate the importance of scale interactions on diurnal, intraseasonal and seasonal timescales for the mean and variability of the tropical climate system. It has enabled some conclusions to be drawn about possible processes that may need to be represented, and some recommendations to be made regarding model improvements. It has been shown that the Maritime Continent heat source is a major driver of the global circulation, yet is poorly represented in GCMs. A new climatology of the diurnal cycle has been used to provide compelling evidence of important land-sea breeze and gravity wave effects, which may play a crucial role in the heat and moisture budget of this key region for the tropical and global circulation. The role of the diurnal cycle has also been emphasized for intraseasonal variability associated with the Madden-Julian Oscillation (MJO). It is suggested that the diurnal cycle in Sea Surface Temperature (SST) during the suppressed phase of the MJO leads to a triggering of cumulus congestus clouds, which serve to moisten the free troposphere and hence precondition the atmosphere for the next active phase. It has been further shown that coupling between the ocean and atmosphere on intraseasonal timescales leads to a more realistic simulation of the MJO. These results stress the need for models to be able to simulate firstly, the observed tri-modal distribution of convection, and secondly, the coupling between the ocean and atmosphere on diurnal to intraseasonal timescales. It is argued, however, that the current representation of the ocean mixed layer in coupled models is not adequate to represent the complex structure of the observed mixed layer, in particular the formation of salinity barrier layers which can potentially provide much stronger local coupling between the atmosphere and ocean on diurnal to intraseasonal timescales.

  16. Prevalence and nature of medication administration errors in health care settings: a systematic review of direct observational evidence.

    Science.gov (United States)

    Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M

    2013-02-01

    To systematically review empirical evidence on the prevalence and nature of medication administration errors (MAEs) in health care settings. Ten electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, Scopus, Applied Social Sciences Index and Abstracts, PsycINFO, Cochrane Reviews and Trials, British Nursing Index, Cumulative Index to Nursing and Allied Health Literature, and Health Management Information Consortium) were searched (1985-May 2012). English-language publications reporting MAE data using the direct observation method were included, providing an error rate could be determined. Reference lists of all included articles were screened for additional studies. In all, 91 unique studies were included. The median error rate (interquartile range) was 19.6% (8.6-28.3%) of total opportunities for error including wrong-time errors and 8.0% (5.1-10.9%) without timing errors, when each dose could be considered only correct or incorrect. The median rate of error when more than 1 error could be counted per dose was 25.6% (20.8-41.7%) and 20.7% (9.7-30.3%), excluding wrong-time errors. A higher median MAE rate was observed for the intravenous route (53.3% excluding timing errors (IQR 26.6-57.9%)) compared to when all administration routes were studied (20.1%; 9.0-24.6%), where each dose could accumulate more than one error. Studies consistently reported wrong time, omission, and wrong dosage among the 3 most common MAE subtypes. Common medication groups associated with MAEs were those affecting nutrition and blood, gastrointestinal system, cardiovascular system, central nervous system, and antiinfectives. Medication administration error rates varied greatly as a product of differing medication error definitions, data collection methods, and settings of included studies. 
Although MAEs remained a common occurrence in health care settings throughout the time covered by this review, potential targets for intervention to minimize MAEs were identified.
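The review's headline rates are simple proportions of errors per total opportunities for error (TOE), reported with and without wrong-time errors; a sketch with hypothetical audit numbers chosen to land near the review's medians:

```python
def mae_rate(errors, opportunities, include_wrong_time=True, wrong_time=0):
    """Medication administration error rate (%) as errors per total
    opportunities for error, optionally excluding wrong-time errors."""
    if not include_wrong_time:
        errors -= wrong_time
    return 100.0 * errors / opportunities

# Hypothetical ward audit: 1200 observed doses, 235 errors, 139 of them timing.
print(round(mae_rate(235, 1200), 1))              # 19.6, near the review median
print(round(mae_rate(235, 1200, False, 139), 1))  # 8.0 without timing errors
```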

  17. Systematic errors in the readings of track etch neutron dosemeters caused by the energy dependence of response

    CERN Document Server

    Tanner, R J; Bartlett, D T; Horwood, N

    1999-01-01

    A study has been performed to assess the extent to which variations in the energy dependence of response of neutron personal dosemeters can cause systematic errors in readings obtained in workplace fields. This involved a detailed determination of the response functions of personal dosemeters used in the UK. These response functions were folded with workplace spectra to ascertain the under- or over-response in workplace fields.
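Folding a response function with a workplace spectrum reduces to a fluence-weighted ratio of indicated to true dose; a three-bin toy (all numbers hypothetical):

```python
import numpy as np

def folded_response(fluence_per_bin, response_per_bin, h10_per_bin):
    """Fold a dosemeter's energy-dependent response with a workplace
    spectrum: ratio of indicated dose to true dose equivalent."""
    indicated = np.sum(fluence_per_bin * response_per_bin)
    true_dose = np.sum(fluence_per_bin * h10_per_bin)
    return indicated / true_dose

# Three-bin toy spectrum (thermal, epithermal, fast):
phi = np.array([0.5, 0.3, 0.2])  # relative fluence per bin
r = np.array([0.4, 0.9, 1.3])    # dosemeter reading per unit fluence
h = np.array([1.0, 1.0, 1.0])    # dose equivalent per unit fluence (normalized)
print(folded_response(phi, r, h))  # < 1: a systematic under-response here
```

A ratio far from unity in a given workplace field is exactly the kind of systematic reading error the study quantifies.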

  18. Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Patel, Sandeep K.

    1998-01-01

    Imaging of the Sunyaev-Zeldovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii, and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0, analyzing the clusters using either their large or mean angular core radius, are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta-model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.

  19. A multi-sensor burned area algorithm for crop residue burning in northwestern India: validation and sources of error

    Science.gov (United States)

    Liu, T.; Marlier, M. E.; Karambelas, A. N.; Jain, M.; DeFries, R. S.

    2017-12-01

    A leading source of outdoor emissions in northwestern India comes from crop residue burning after the annual monsoon (kharif) and winter (rabi) crop harvests. Agricultural burned area, from which agricultural fire emissions are often derived, can be poorly quantified due to the mismatch between moderate-resolution satellite sensors and the relatively small size and short burn period of the fires. Many previous studies use the Global Fire Emissions Database (GFED), which is based on the Moderate Resolution Imaging Spectroradiometer (MODIS) burned area product MCD64A1, as an outdoor fire emissions dataset. Correction factors based on MODIS active fire detections have previously attempted to account for small fires. We present a new burned area classification algorithm that leverages more frequent MODIS observations (500 m x 500 m) with higher spatial resolution Landsat (30 m x 30 m) observations. Our approach is based on two-tailed Normalized Burn Ratio (NBR) thresholds, abbreviated as ModL2T NBR, and results in an estimated 104 ± 55% higher burned area than GFEDv4.1s (version 4, MCD64A1 + small fires correction) in northwestern India during the 2003-2014 winter (October to November) burning seasons. Regional transport of winter fire emissions affects approximately 63 million people downwind. The general increase in burned area (+37% from 2003-2007 to 2008-2014) over the study period also correlates with increased mechanization (+58% in combine harvester usage from 2001-2002 to 2011-2012). Further, we find strong correlations between ModL2T NBR-derived burned area and the results of an independent survey (r = 0.68) and previous studies (r = 0.92). Sources of error arise from small median landholding sizes (1-3 ha), the heterogeneous spatial distribution of two dominant burning practices (partial and whole field), coarse spatio-temporal satellite resolution, cloud and haze cover, and limited Landsat scene availability.
The burned area estimates of this study can be used to build
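
    The two-tailed NBR test at the heart of the ModL2T approach described above can be sketched as follows. A pixel is classified as burned when its pre-fire NBR indicates vegetation and its post-fire NBR drop (dNBR) is large enough. The band inputs and threshold values here are illustrative assumptions, not the authors' calibrated parameters.

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance."""
    return (nir - swir) / (nir + swir)

def is_burned(nir_pre, swir_pre, nir_post, swir_post,
              pre_nbr_min=0.3, dnbr_min=0.1):
    """Two-tailed test: the pixel must look vegetated before the fire
    (high pre-fire NBR) and show a sufficient NBR drop afterwards (high dNBR).
    Threshold values are hypothetical placeholders."""
    pre = nbr(nir_pre, swir_pre)
    dnbr = pre - nbr(nir_post, swir_post)
    return pre > pre_nbr_min and dnbr > dnbr_min

# a vegetated pixel that darkens in NIR and brightens in SWIR after a burn
print(is_burned(0.5, 0.2, 0.3, 0.4))  # -> True
```

    The two-tailed form reduces commission error from permanently dark surfaces, which fail the pre-fire vegetation test even when their NBR is low after harvest.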

  20. Interventions to reduce nurses' medication administration errors in inpatient settings: A systematic review and meta-analysis.

    Science.gov (United States)

    Berdot, Sarah; Roudot, Marjorie; Schramm, Catherine; Katsahian, Sandrine; Durieux, Pierre; Sabatier, Brigitte

    2016-01-01

    Serious medication administration errors are common in hospitals. Various interventions, including barcode-based technologies, have been developed to help prevent such errors. This systematic review and meta-analysis focuses on the efficacy of interventions for reducing medication administration errors. The types of error and their gravity were also studied. MEDLINE, EMBASE, the Cochrane Library and reference lists of relevant articles published between January 1975 and August 2014 were searched, without language restriction. Randomized controlled trials, interrupted time-series studies, non-randomized controlled trials and controlled before-and-after studies were included. Studies evaluating interventions for decreasing administration errors based on the total opportunity for error method were included. Nurses administering medications to adult or child inpatients were considered eligible as participants. Two reviewers independently assessed studies for eligibility, extracted data and assessed the risk of bias. The main outcome was the error rate without wrong-time errors, measured at study level. A random effects model was used to evaluate the effects of interventions on administration errors. In all, 5312 records from electronic database searches were identified. Seven studies were included: five were randomized controlled trials (including one crossover trial) and two were non-randomized controlled trials. Interventions were training-related (n=4; dedicated medication nurses, an interactive CD-ROM program, simulation-based learning, a pharmacist-led training program) and technology-related (n=3; computerized prescribing and automated medication dispensing systems). All studies were subject to a high risk of bias, mostly due to a lack of blinding of outcome assessment and a risk of contamination. No difference between the control group and the intervention group was found (OR=0.72 [0.39; 1.34], p=0.3).
No fatal error was observed in the three studies evaluating the gravity of

  1. Doppler imaging of chemical spots on magnetic Ap/Bp stars. Numerical tests and assessment of systematic errors

    Science.gov (United States)

    Kochukhov, O.

    2017-01-01

    Context. Doppler imaging (DI) is a powerful spectroscopic inversion technique that enables conversion of a line profile time series into a two-dimensional map of the stellar surface inhomogeneities. DI has been repeatedly applied to reconstruct chemical spot topologies of magnetic Ap/Bp stars with the goal of understanding variability of these objects and gaining an insight into the physical processes responsible for spot formation. Aims: In this paper we investigate the accuracy of chemical abundance DI and assess the impact of several different systematic errors on the reconstructed spot maps. Methods: We have simulated spectroscopic observational data for two different Fe spot distributions with a surface abundance contrast of 1.5 dex in the presence of a moderately strong dipolar magnetic field. We then reconstructed chemical maps using different sets of spectral lines and making different assumptions about line formation in the inversion calculations. Results: Our numerical experiments demonstrate that a modern DI code successfully recovers the input chemical spot distributions comprised of multiple circular spots at different latitudes or an element overabundance belt at the magnetic equator. For the optimal reconstruction based on half a dozen spectral intervals, the average reconstruction errors do not exceed 0.10 dex. The errors increase to about 0.15 dex when abundance distributions are recovered from a few and/or blended spectral lines. Ignoring a 2.5 kG dipolar magnetic field in chemical abundance DI leads to an average relative error of 0.2 dex and maximum errors of 0.3 dex. Similar errors are encountered if a DI inversion is carried out neglecting a non-uniform continuum brightness distribution and variation of the local atmospheric structure. None of the considered systematic effects lead to major spurious features in the recovered abundance maps. 
Conclusions: This series of numerical DI simulations proves that inversions based on one or two spectral

  2. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub
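
    The bias-correction recipe in the abstract (time-mean analysis increment divided by the 6-h window, then subtracted as a forcing term in the model tendency) can be illustrated with a minimal sketch. The array shapes and the constant toy increment below are assumptions for demonstration, not GFS data.

```python
import numpy as np

def estimate_bias(analysis_increments, window_hours=6.0):
    """Time-mean analysis increment per hour; assumes initial model error
    grows linearly and observation bias is negligible."""
    return np.mean(analysis_increments, axis=0) / window_hours

def corrected_tendency(model_tendency, bias_per_hour):
    """Online correction: subtract the estimated bias from the model tendency."""
    return model_tendency - bias_per_hour

# toy example: 120 assimilation cycles of a 4 x 4 temperature field in which
# the analysis consistently warms the 6-h forecast by 0.6 K
increments = np.full((120, 4, 4), 0.6)
bias = estimate_bias(increments)                  # 0.1 K per hour everywhere
tendency = corrected_tendency(np.zeros((4, 4)), bias)
```

    With real increments the time mean would be taken per season (and per diurnal phase) to capture the robust seasonal and diurnal bias components described above.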

  3. Autofocus Correction of Azimuth Phase Error and Residual Range Cell Migration in Spotlight SAR Polar Format Imagery

    OpenAIRE

    Mao, Xinhua; Zhu, Daiyin; Zhu, Zhaoda

    2012-01-01

    Synthetic aperture radar (SAR) images are often blurred by phase perturbations induced by uncompensated sensor motion and/or unknown propagation effects caused by turbulent media. To get refocused images, autofocus proves to be a useful post-processing technique applied to estimate and compensate the unknown phase errors. However, a severe drawback of the conventional autofocus algorithms is that they are only capable of removing one-dimensional azimuth phase errors (APE). As the resolution be...

  4. Residual position errors of lymph node surrogates in breast cancer adjuvant radiotherapy: Comparison of two arm fixation devices and the effect of arm position correction

    International Nuclear Information System (INIS)

    Kapanen, Mika; Laaksomaa, Marko; Skyttä, Tanja; Haltamo, Mikko; Pehkonen, Jani; Lehtonen, Turkka; Kellokumpu-Lehtinen, Pirkko-Liisa; Hyödynmaa, Simo

    2016-01-01

    Residual position errors of the lymph node (LN) surrogates and humeral head (HH) were determined for 2 different arm fixation devices in radiotherapy (RT) of breast cancer: a standard wrist-hold (WH) and a house-made rod-hold (RH). The effect of arm position correction (APC) based on setup images was also investigated. A total of 113 consecutive patients with early-stage breast cancer with LN irradiation were retrospectively analyzed (53 and 60 using the WH and RH, respectively). Residual position errors of the LN surrogates (Th1-2 and clavicle) and the HH were investigated to compare the 2 fixation devices. The position errors and setup margins were determined before and after the APC to investigate the efficacy of the APC in the treatment situation. A threshold of 5 mm was used for the residual errors of the clavicle and Th1-2 to perform the APC, and a threshold of 7 mm was used for the HH. The setup margins were calculated with the van Herk formula. Irradiated volumes of the HH were determined from RT treatment plans. With the WH and the RH, setup margins up to 8.1 and 6.7 mm should be used for the LN surrogates, and margins up to 4.6 and 3.6 mm should be used to spare the HH, respectively, without the APC. After the APC, the margins of the LN surrogates were equal to or less than 7.5/6.0 mm with the WH/RH, but margins up to 4.2/2.9 mm were required for the HH. The APC was needed at least once with both the devices for approximately 60% of the patients. With the RH, irradiated volume of the HH was approximately 2 times more than with the WH, without any dose constraints. Use of the RH together with the APC resulted in minimal residual position errors and setup margins for all the investigated bony landmarks. Based on the obtained results, we prefer the house-made RH. However, more attention should be given to minimize the irradiation of the HH with the RH than with the WH.
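
    The van Herk margin recipe used in the study above combines the population systematic and random error components as M = 2.5Σ + 0.7σ. A minimal sketch, with illustrative error SDs rather than the study's measured values:

```python
def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
    """CTV-to-PTV setup margin (mm): M = 2.5 * Sigma + 0.7 * sigma,
    where Sigma is the SD of systematic errors and sigma the SD of
    random errors. Ensures the CTV receives at least 95% of the
    prescribed dose for 90% of patients."""
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

# hypothetical population SDs of 2 mm (systematic) and 3 mm (random)
margin = van_herk_margin(2.0, 3.0)      # about 7.1 mm
```

    The heavier weight on Σ is why reducing residual systematic errors (e.g., through the arm position correction above) shrinks margins more effectively than reducing random day-to-day variation.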

  5. Effect of critical care pharmacist's intervention on medication errors: A systematic review and meta-analysis of observational studies.

    Science.gov (United States)

    Wang, Tiansheng; Benedict, Neal; Olsen, Keith M; Luan, Rong; Zhu, Xi; Zhou, Ningning; Tang, Huilin; Yan, Yingying; Peng, Yao; Shi, Luwen

    2015-10-01

    Pharmacists are integral members of the multidisciplinary team for critically ill patients. Multiple nonrandomized controlled studies have evaluated the outcomes of pharmacist interventions in the intensive care unit (ICU). This systematic review focuses on controlled clinical trials evaluating the effect of pharmacist intervention on medication errors (MEs) in ICU settings. Two independent reviewers searched Medline, Embase, and Cochrane databases. The inclusion criteria were nonrandomized controlled studies that evaluated the effect of pharmacist services vs no intervention on ME rates in ICU settings. Four studies were included in the meta-analysis. Results suggest that pharmacist intervention has no significant contribution to reducing general MEs, although pharmacist intervention may significantly reduce preventable adverse drug events and prescribing errors. This meta-analysis highlights the need for high-quality studies to examine the effect of the critical care pharmacist. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Harper, C.S.; Wee, L.

    2011-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC positional errors on step and shoot IMRT of prostate cancer. A total of 12 perturbations of MLC leaf banks were introduced to six prostate IMRT treatment plans to simulate MLC systematic positional errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) for the planning target volume (PTV), rectum and bladder. Negative perturbations of MLC were found to produce greater changes to endpoint dose parameters than positive perturbations, with median changes in D95 of -1.2 and 0.9%, respectively. Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in D95 of -2.3 and 1.8%, respectively. Doses to the rectum were generally more sensitive to systematic MLC errors than doses to the bladder (p < 0.01). Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in endpoint dose parameters of the rectum and bladder from 1.0 to 2.5%. Maximum reductions of -4.4 and -7.3% were recorded for conformity index (CI) and healthy tissue avoidance (HTA), respectively, due to a synchronised MLC perturbation of 1 mm. MLC errors resulted in dosimetric changes in IMRT plans for prostate cancer. (author)

  7. Refractive error and risk of early or late age-related macular degeneration: a systematic review and meta-analysis.

    Science.gov (United States)

    Li, Ying; Wang, Jiwen; Zhong, Xiaojing; Tian, Zhen; Wu, Peipei; Zhao, Wenbo; Jin, Chenjin

    2014-01-01

    To summarize relevant evidence investigating the associations between refractive error and age-related macular degeneration (AMD). Systematic review and meta-analysis. We searched Medline, Web of Science, and Cochrane databases as well as the reference lists of retrieved articles to identify studies that met the inclusion criteria. Extracted data were combined using a random-effects meta-analysis. Studies that were pertinent to our topic but did not meet the criteria for quantitative analysis were reported in a systematic review instead. Pooled odds ratios (ORs) and 95% confidence intervals (CIs) for the associations between refractive error (hyperopia, myopia, per-diopter increase in spherical equivalent [SE] toward hyperopia, per-millimeter increase in axial length [AL]) and AMD (early and late, prevalent and incident). Fourteen studies comprising over 5800 patients were eligible. Significant associations were found between hyperopia, myopia, per-diopter increase in SE, per-millimeter increase in AL, and prevalent early AMD. The pooled ORs and 95% CIs were 1.13 (1.06-1.20), 0.75 (0.56-0.94), 1.10 (1.07-1.14), and 0.79 (0.73-0.85), respectively. The per-diopter increase in SE was also significantly associated with early AMD incidence (OR, 1.06; 95% CI, 1.02-1.10). However, no significant association was found between hyperopia or myopia and early AMD incidence. Furthermore, neither prevalent nor incident late AMD was associated with refractive error. Considerable heterogeneity was found among studies investigating the association between myopia and prevalent early AMD (P = 0.001, I2 = 72.2%). Geographic location might play a role; the heterogeneity became non-significant after stratifying these studies into Asian and non-Asian subgroups. Refractive error is associated with early AMD but not with late AMD. More large-scale longitudinal studies are needed to further investigate such associations.
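
    A random-effects pooling of odds ratios like the one reported above can be sketched with the DerSimonian-Laird estimator. The three (OR, 95% CI) inputs in the usage line are taken from the abstract (hyperopia, myopia, per-diopter SE increase) purely to illustrate the mechanics, not to reproduce the study's analysis.

```python
import math

def pooled_or(ors, ci_lows, ci_highs):
    """DerSimonian-Laird random-effects pooled odds ratio from per-study
    ORs and their 95% confidence intervals."""
    logs = [math.log(o) for o in ors]
    # back out each study's standard error from its 95% CI on the log scale
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96)
           for lo, hi in zip(ci_lows, ci_highs)]
    w = [1.0 / se ** 2 for se in ses]                   # fixed-effect weights
    mu = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    q = sum(wi * (li - mu) ** 2 for wi, li in zip(w, logs))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ors) - 1)) / c)           # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]       # random-effects weights
    return math.exp(sum(wi * li for wi, li in zip(w_re, logs)) / sum(w_re))

# ORs for prevalent early AMD from the abstract, reused here as toy inputs
print(round(pooled_or([1.13, 0.75, 1.10],
                      [1.06, 0.56, 1.07],
                      [1.20, 0.94, 1.14]), 3))
```

    When Q exceeds its degrees of freedom (as with the I2 = 72.2% heterogeneity noted above), tau2 grows and the pooled estimate gives more even weight across studies.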

  8. Systematic Errors in Stereo PIV When Imaging through a Glass Window

    Science.gov (United States)

    Green, Richard; McAlister, Kenneth W.

    2004-01-01

    This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window and where the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window, otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better spent performing a lengthier volumetric calibration procedure, which does not rely upon the assumptions implicit in the single-plane method and avoids the need for the perspective angle to be calculated.

  9. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Science.gov (United States)

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...
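
    In the spirit of the framework above, a toy Monte Carlo error propagation might perturb a per-pixel biomass map with simulated remote-sensing error and read off the spread of the total-carbon estimate. The error model (independent 15% Gaussian noise per pixel) and all numbers are assumptions for demonstration only.

```python
import random

def mc_total_uncertainty(biomass_map, rel_error_sd=0.15, n_draws=1000, seed=42):
    """Monte Carlo propagation of per-pixel relative error to the
    uncertainty of the map total; returns (mean, standard deviation)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_draws):
        totals.append(sum(b * (1.0 + rng.gauss(0.0, rel_error_sd))
                          for b in biomass_map))
    mean = sum(totals) / n_draws
    var = sum((t - mean) ** 2 for t in totals) / (n_draws - 1)
    return mean, var ** 0.5

# 100 hypothetical pixels of 10 t/ha each: total 1000, sd near 15 (0.15*10*sqrt(100))
mean, sd = mc_total_uncertainty([10.0] * 100)
```

    A realistic implementation would draw spatially correlated errors, since assuming independence (as here) understates the uncertainty of large-area totals.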

  10. Integrated Sachs-Wolfe map reconstruction in the presence of systematic errors

    Science.gov (United States)

    Weaverdyck, Noah; Muir, Jessica; Huterer, Dragan

    2018-02-01

    The decay of gravitational potentials in the presence of dark energy leads to an additional, late-time contribution to anisotropies in the cosmic microwave background (CMB) at large angular scales. The imprint of this so-called integrated Sachs-Wolfe (ISW) effect to the CMB angular power spectrum has been detected and studied in detail, but reconstructing its spatial contributions to the CMB map, which would offer the tantalizing possibility of separating the early- from the late-time contributions to CMB temperature fluctuations, is more challenging. Here, we study the technique for reconstructing the ISW map based on information from galaxy surveys and focus in particular on how its accuracy is impacted by the presence of photometric calibration errors in input galaxy maps, which were previously found to be a dominant contaminant for ISW signal estimation. We find that both including tomographic information from a single survey and using data from multiple, complementary galaxy surveys improve the reconstruction by mitigating the impact of spurious power contributions from calibration errors. A high-fidelity reconstruction further requires one to account for the contribution of calibration errors to the observed galaxy power spectrum in the model used to construct the ISW estimator. We find that if the photometric calibration errors in galaxy surveys can be independently controlled at the level required to obtain unbiased dark energy constraints, then it is possible to reconstruct ISW maps with excellent accuracy using a combination of maps from two galaxy surveys with properties similar to Euclid and SPHEREx.

  11. Systematic analysis of dependent human errors from the maintenance history at finnish NPPs - A status report

    International Nuclear Information System (INIS)

    Laakso, K.

    2002-12-01

    Operating experience has shown missed detection events, where faults have passed inspections and functional tests and persisted into operating periods after the maintenance activities during the outage. The causes of these failures have often been complex event sequences involving human and organisational factors. Especially common cause and other dependent failures of safety systems may significantly contribute to the reactor core damage risk. The topic has been addressed in Finnish studies of human common cause failures, in which experiences of latent human errors have been searched for and analysed in detail from the maintenance history. A review of the bulk of the analysis results from the Olkiluoto and Loviisa plant sites shows that instrumentation and control and electrical equipment are more prone to failure events caused by human error than other maintenance areas, and that plant modifications and predetermined preventive maintenance are significant sources of common cause failures. Most errors stem from the refuelling and maintenance outage period at both sites, and less than half of the dependent errors were identified during the same outage. The dependent human errors originating from modifications could be reduced by a more tailored specification and coverage of their start-up testing programs. Improvements could also be achieved by more case-specific planning of the installation inspection and functional testing of complicated maintenance works, or of work objects of higher plant safety and availability importance. Better use and analysis of condition monitoring information for maintenance steering could also help. Feedback from discussions of the analysis results with plant experts and professionals remains crucial in developing final conclusions and recommendations that meet the specific development needs of the plants. (au)

  12. Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with the cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

  13. A systematic review of patient medication error on self-administering medication at home.

    Science.gov (United States)

    Mira, José Joaquín; Lorenzo, Susana; Guilabert, Mercedes; Navarro, Isabel; Pérez-Jover, Virtudes

    2015-06-01

    Medication errors have been analyzed as a health professionals' responsibility (due to mistakes in prescription, preparation or dispensing). However, sometimes patients themselves (or their caregivers) make mistakes in the administration of the medication. The epidemiology of patient medication errors (PEs) has been scarcely reviewed in spite of its impact on people, on therapeutic effectiveness and on incremental cost for the health systems. This study reviews and describes the methodological approaches and results of published studies on the frequency, causes and consequences of medication errors committed by patients at home. A review of research articles published between 1990 and 2014 was carried out using MEDLINE, Web-of-Knowledge, Scopus, Tripdatabase and Index Medicus. The frequency of PEs ranged between 19% and 59%. The elderly and preschool populations accounted for more mistakes than other groups. The most common were: incorrect dosage, forgetting, mixing up medications, failing to recall indications and taking out-of-date or inappropriately stored drugs. The majority of these mistakes had no negative consequences. Health literacy, information and communication, and complexity of use of dispensing devices were identified as causes of PEs. Apps and other new technologies offer several opportunities for improving drug safety.

  14. Systematic Errors of the Efficiency Tracer Technique for Measuring the Absolute Disintegration Rates of Pure Beta Emitters

    International Nuclear Information System (INIS)

    Williams, A.; Goodier, I.W.

    1967-01-01

    A basic requirement of the theory of the efficiency tracer technique is the generally accepted assumption that there is a linear relationship between the efficiencies of the pure β-emitter and the tracer. However, an estimate of the inherent accuracy of the efficiency tracer technique has shown that, on theoretical grounds, this linear relationship would only be expected if the end-point energies and the shapes of the β-spectra of the tracer and the pure β-emitter were identical, the departure from linearity depending upon the ratio of the respective end-point energies. An experimentally determined value of the absolute disintegration rate of the pure emitter, obtained using a linear relationship, would have a significant systematic error if this relationship were in fact non-linear, for the usual straight-line extrapolation to 100% efficiency for the tracer would have to be replaced by an extrapolation with a significant curvature. To look for any non-linearity in the relationship it is first necessary to reduce the random measurement errors to a minimum. The first part of the paper contains a derivation of an expression for the expected value of these random errors in terms of the known statistical errors in the measurement. This expression shows that the ratio of the pure β-emitter and tracer activities can be chosen to make the random errors a minimum. The second part of the paper shows that it is possible to obtain an experimental error, which is comparable to that predicted by the expression derived above, for a pure β-emitter and tracer, combined in the same chemical form, whose end-point energies are similar (e.g. 32P and 24Na). To look for any non-linearity in the relationship between pure β-emitter and tracer efficiencies, 35S (end-point energy E0 = 168 keV) was measured with 60Co (E0 = 310 keV) and 134Cs (effective E0 = 110 keV) as tracers. The results of these measurements showed that there was a significant curvature, of opposite sign, for the
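
    The straight-line extrapolation that the abstract questions can be sketched as a least-squares fit of the observed pure-beta count rate against tracer efficiency, evaluated at 100% tracer efficiency. The data points below are invented for illustration; if the true relationship is curved, this linear extrapolation carries exactly the kind of systematic error the paper describes.

```python
def extrapolate_to_unit_efficiency(tracer_eff, beta_rate):
    """Least-squares straight line through (tracer efficiency, count rate)
    points, evaluated at tracer efficiency = 1 (i.e. 100%)."""
    n = len(tracer_eff)
    mx = sum(tracer_eff) / n
    my = sum(beta_rate) / n
    sxx = sum((x - mx) ** 2 for x in tracer_eff)
    sxy = sum((x - mx) * (y - my) for x, y in zip(tracer_eff, beta_rate))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope * 1.0 + intercept      # estimated absolute disintegration rate

# hypothetical count rates (counts/s) at three tracer efficiencies
print(extrapolate_to_unit_efficiency([0.6, 0.7, 0.8], [800.0, 850.0, 900.0]))
```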

  15. Systematic errors in global air-sea CO2 flux caused by temporal averaging of sea-level pressure

    Directory of Open Access Journals (Sweden)

    H. Kettle

    2005-01-01

    Long-term temporal averaging of meteorological data, such as wind speed and air pressure, can cause large errors in air-sea carbon flux estimates. Other researchers have already shown that time averaging of wind speed data creates large errors in flux due to the non-linear dependence of the gas transfer velocity on wind speed (Bates and Merlivat, 2001). However, in general, wind speed is negatively correlated with air pressure, and a given fractional change in the pressure of dry air produces an equivalent fractional change in the atmospheric partial pressure of carbon dioxide (pCO2air). Thus low pressure systems cause a drop in pCO2air, which, together with the associated high winds, promotes outgassing/reduces uptake of CO2 from the ocean. Here we quantify the errors in global carbon flux estimates caused by using monthly or climatological pressure data to calculate pCO2air (and thus ignoring the covariance of wind and pressure) over the period 1990-1999, using two common parameterisations for gas transfer velocity. Results show that on average, compared with estimates made using 6-hourly pressure data, the global oceanic sink is systematically overestimated by 7% (W92) and 10% (WM99) when monthly mean pressure is used, and by 9% (W92) and 12% (WM99) when climatological pressure is used.
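
    The averaging bias described above can be reproduced in a toy calculation: draw 6-hourly wind and pressure with a negative correlation, use a quadratic (W92-style) transfer velocity, and compare the mean flux computed from resolved pressure against the flux computed from period-mean pressure. All constants (wind and pressure statistics, pCO2 values, scaling) are illustrative assumptions.

```python
import random

random.seed(1)
XCO2 = 380e-6          # dry-air CO2 mole fraction (assumed)
PCO2_SEA = 370.0       # surface-ocean pCO2 in uatm, held fixed for the toy

# ~250 days of 6-hourly "weather": high winds accompany low pressure
winds, pressures = [], []
for _ in range(1000):
    z = random.gauss(0.0, 1.0)
    winds.append(max(0.0, 8.0 + 3.0 * z))   # m/s
    pressures.append(1013.0 - 8.0 * z)      # hPa

def flux(u, p_hpa):
    """Air-sea CO2 flux in arbitrary units; positive = outgassing."""
    k = 0.31 * u * u                         # quadratic transfer velocity
    pco2_air = XCO2 * 1e6 * p_hpa / 1013.25  # uatm, scales with pressure
    return k * (PCO2_SEA - pco2_air)

f_resolved = sum(flux(u, p) for u, p in zip(winds, pressures)) / len(winds)
p_mean = sum(pressures) / len(pressures)
f_averaged = sum(flux(u, p_mean) for u in winds) / len(winds)

# f_averaged is more negative than f_resolved: mean pressure hides the
# low-pressure/high-wind events, so averaging overestimates the oceanic sink
```

    The gap between the two estimates comes entirely from the wind-pressure covariance that monthly or climatological pressure data discard.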

  16. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  17. Errors, lies and misunderstandings: Systematic review on behavioural decision making in projects

    DEFF Research Database (Denmark)

    Stingl, Verena; Geraldi, Joana

    2017-01-01

    in projects and beyond. However, the literature is fragmented and draws only on a fraction of the recent, insightful, and relevant developments on behavioural decision making. This paper organizes current research in a conceptual framework rooted in three schools of thinking—reductionist (on cognitive...... limitations—errors), pluralist (on political behaviour—lies), and contextualist (on social and organizational sensemaking—misunderstandings). Our review suggests avenues for future research with a wider coverage of theories in cognitive and social psychology and critical and mindful integration of findings...

  18. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    Directory of Open Access Journals (Sweden)

    Ulrik W Nash

    Full Text Available Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem.

  19. Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review

    Science.gov (United States)

    2013-01-01

    Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients. PMID:23565739

  20. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    Science.gov (United States)

    Nash, Ulrik W

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem.
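The quincunx intuition behind skewed judgment distributions can be illustrated with a deliberately simplified simulation (this is a sketch of the general idea, not the authors' AQ model): each judge processes n binary cues and categorizes each correctly, i.e. steps toward the true value, with probability p > 0.5. When the true value sits at one extreme of the anchored range, judgments pile up near that extreme and the distribution skews toward the opposite tail.

```python
import numpy as np

# Simplified quincunx-style simulation (illustrative, not the AQ model itself):
# judgments = number of cues categorized correctly out of n_cues, so the
# distribution is binomial and skews away from the extreme when p > 0.5.
rng = np.random.default_rng(1)
n_judges, n_cues, p = 50_000, 10, 0.7

correct = rng.random((n_judges, n_cues)) < p
judgments = correct.sum(axis=1)  # steps taken toward the (extreme) true value

# Sample skewness via central moments.
dev = judgments - judgments.mean()
skewness = (dev**3).mean() / (dev**2).mean() ** 1.5
print(f"mean = {judgments.mean():.2f}, skewness = {skewness:.2f}")
```

With p = 0.7 the distribution tilts left (negative skew), i.e. the long tail points away from the extreme true value, which is the direction-of-tilt signal the abstract says can be read off the crowd's judgment distribution.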

  1. Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review.

    Science.gov (United States)

    Viana, Michele; Tassorelli, Cristina; Allena, Marta; Nappi, Giuseppe; Sjaastad, Ottar; Antonaci, Fabio

    2013-02-18

    Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients.

  2. Sensitivity analysis of crustal correction and its error propagation to upper mantle residual gravity and density anomalies

    DEFF Research Database (Denmark)

    Herceg, Matija; Artemieva, Irina; Thybo, Hans

    2013-01-01

    We investigate the effect of the crustal structure heterogeneity and uncertainty in its determination on stripped gravity field. The analysis is based on interpretation of residual upper mantle gravity anomalies which are calculated by subtracting (stripping) the gravitational effect of the crust...... a relatively small range of expected density variations in the lithospheric mantle, knowledge on the uncertainties associated with incomplete knowledge of density structure of the crust is of utmost importance for further progress in such studies......) uncertainties in the velocity-density conversion and (ii) uncertainties in knowledge of the crustal structure (thickness and average Vp velocities of individual crustal layers, including the sedimentary cover). In this study, we address both sources of possible uncertainties by applying different conversions...... from velocity to density and by introducing variations into the crustal structure which corresponds to the uncertainty of its resolution by high-quality and low-quality seismic models. We examine the propagation of these uncertainties into determinations of lithospheric mantle density. The residual...

  3. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences...... about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can...... be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates...

  4. Evidence of systematic errors in SCIAMACHY-observed CO2 due to aerosols

    Directory of Open Access Journals (Sweden)

    S. Houweling

    2005-01-01

    Full Text Available SCIAMACHY CO2 measurements show a large variability in total column CO2 over the Sahara desert of up to 10%, which is not anticipated from in situ measurements and cannot be explained by results of atmospheric models. Comparisons with colocated aerosol measurements by TOMS and MISR over the Sahara indicate that the seasonal variation of SCIAMACHY-observed CO2 strongly resembles seasonal variations of windblown dust. Correlation coefficients of monthly datasets of colocated MISR aerosol optical depth and SCIAMACHY CO2 vary between 0.6 and 0.8, indicating that about half of the CO2 variance is explained by aerosol optical depth. Radiative transfer model calculations confirm the role of dust and can explain the size of the errors. Sensitivity tests suggest that the remaining variance may largely be explained by variations in the vertical distribution of dust. Further calculations for a few typical aerosol classes and a broad range of atmospheric conditions show that the impact of aerosols on SCIAMACHY retrieved CO2 is by far the largest over the Sahara, but may also reach significant levels elsewhere. Over the continents, aerosols lead mostly to overestimated CO2 columns with the exception of biomass burning plumes and dark coniferous forests. Inverse modelling calculations confirm that aerosol correction of SCIAMACHY CO2 measurements is needed to derive meaningful source and sink estimates. Methods for correcting aerosol-induced errors exist, but so far mainly on the basis of theoretical considerations. As demonstrated by this study, SCIAMACHY may contribute to a verification of such methods using real data.

  5. Memory effects and systematic errors in the RL signal from fiber coupled Al2O3:C for medical dosimetry

    DEFF Research Database (Denmark)

    Damkjær, Sidsel Marie Skov; Andersen, Claus Erik

    2010-01-01

    The radioluminescence (RL) signal from fiber coupled Al2O3:C can be used for real-time in vivo dosimetry during radiotherapy. RL generally provides measurements with a reproducibility of 2% (one standard deviation). However, we have...... observed a non-random variability of the RL signal, which means that the memory of the system is not fully reset by the optically stimulated luminescence (OSL) readout. Here we report an example of how this memory affects the RL signal. Measurements were performed in the range of 0–4 Gy using four Al2O3:C...... crystals, in cycles of irradiation and subsequent readout. We found the memory to be persistent, influencing several successive RL measurements. The induced systematic error was found to be crystal dependent, but proportional to the measurement-to-measurement dose variation (approximately 1.4% per Gy)....

  6. A review of sources of systematic errors and uncertainties in observations and simulations at 183 GHz

    Science.gov (United States)

    Brogniez, Helene; English, Stephen; Mahfouf, Jean-Francois; Behrendt, Andreas; Berg, Wesley; Boukabara, Sid; Buehler, Stefan Alexander; Chambon, Philippe; Gambacorta, Antonia; Geer, Alan; Ingram, William; Kursinski, E. Robert; Matricardi, Marco; Odintsova, Tatyana A.; Payne, Vivienne H.; Thorne, Peter W.; Tretyakov, Mikhail Yu.; Wang, Junhong

    2016-05-01

    Several recent studies have observed systematic differences between measurements in the 183.31 GHz water vapor line by space-borne sounders and calculations using radiative transfer models, with inputs from either radiosondes (radiosonde observations, RAOBs) or short-range forecasts by numerical weather prediction (NWP) models. This paper discusses all the relevant categories of observation-based or model-based data, quantifies their uncertainties and separates biases that could be common to all causes from those attributable to a particular cause. Reference observations from radiosondes, Global Navigation Satellite System (GNSS) receivers, differential absorption lidar (DIAL) and Raman lidar are thus overviewed. Biases arising from their calibration procedures, NWP models and data assimilation, instrument biases and radiative transfer models (both the models themselves and the underlying spectroscopy) are presented and discussed. Although presently no single process in the comparisons seems capable of explaining the observed structure of bias, recommendations are made in order to better understand the causes.

  7. A Novel, Physics-Based Data Analytics Framework for Reducing Systematic Model Errors

    Science.gov (United States)

    Wu, W.; Liu, Y.; Vandenberghe, F. C.; Knievel, J. C.; Hacker, J.

    2015-12-01

    Most climate and weather models exhibit systematic biases, such as underpredicted diurnal temperatures in the WRF (Weather Research and Forecasting) model. General approaches to alleviate the systematic biases include improving model physics and numerics, improving data assimilation, and bias correction through post-processing. In this study, we developed a novel, physics-based data analytics framework in post-processing by taking advantage of ever-growing high-resolution (spatial and temporal) observational and modeling data. In the framework, a spatiotemporal PCA (Principal Component Analysis) is first applied on the observational data to filter out noise and information on scales that a model may not be able to resolve. The filtered observations are then used to establish regression relationships with archived model forecasts in the same spatiotemporal domain. The regressions along with the model forecasts predict the projected observations in the forecasting period. The pre-regression PCA procedure strengthens regressions, and enhances predictive skills. We then combine the projected observations with the past observations to apply PCA iteratively to derive the final forecasts. This post-regression PCA reconstructs variances and scales of information that are lost in the regression. The framework was examined and validated with 24 days of 5-minute observational data and archives from the WRF model at 27 stations near Dugway Proving Ground, Utah. The validation shows significant bias reduction in the diurnal cycle of predicted surface air temperature compared to the direct output from the WRF model. Additionally, unlike other post-processing bias correction schemes, the data analytics framework does not require long-term historic data and model archives. A week or two of the data is enough to take into account changes in weather regimes. The program, written in Python, is also computationally efficient.
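The first two steps of the described pipeline (PCA-filter the observations, then regress them on archived forecasts) can be sketched on synthetic data. Everything below is a hedged toy: the station layout, noise levels, and bias structure are invented for illustration, not taken from the study.

```python
import numpy as np

# Toy version of: (1) PCA-filter observations, (2) regress filtered obs on
# archived forecasts, (3) apply the regression to correct the forecast bias.
rng = np.random.default_rng(2)
n_times, n_stations = 480, 27

# Synthetic "truth": a diurnal-like signal with station-dependent amplitude.
truth = np.sin(np.linspace(0, 24 * np.pi, n_times))[:, None] * rng.uniform(0.5, 1.5, n_stations)
obs = truth + 0.1 * rng.standard_normal((n_times, n_stations))
forecast = 0.8 * truth - 0.5 + 0.3 * rng.standard_normal((n_times, n_stations))  # biased model

# Step 1: keep only the leading principal components of the observations.
obs_mean = obs.mean(axis=0)
u, s, vt = np.linalg.svd(obs - obs_mean, full_matrices=False)
k = 3
obs_filtered = (u[:, :k] * s[:k]) @ vt[:k] + obs_mean

# Step 2: per-station linear regression of filtered obs on the forecast.
corrected = np.empty_like(forecast)
for j in range(n_stations):
    slope, intercept = np.polyfit(forecast[:, j], obs_filtered[:, j], 1)
    corrected[:, j] = slope * forecast[:, j] + intercept

bias_raw = abs((forecast - truth).mean())
bias_corr = abs((corrected - truth).mean())
print(f"raw bias {bias_raw:.3f} -> corrected bias {bias_corr:.3f}")
```

The pre-regression PCA step matters because regressing against noisy, unresolvable scales weakens the fitted relationship; filtering first strengthens the regression, which is exactly the rationale the abstract gives.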

  8. Avoiding Systematic Errors in Isometric Squat-Related Studies without Pre-Familiarization by Using Sufficient Numbers of Trials

    Directory of Open Access Journals (Sweden)

    Pekünlü Ekim

    2014-10-01

    Full Text Available There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first “n” trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (The Best of n Trials Method [BnT]). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93-0.98) and coefficients of variation (3.4-7.0%). The Wilcoxon’s signed rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9-10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning effect-related systematic errors.
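The learning-effect bias in best-of-n testing can be reproduced with a small simulation (all parameters below, including the strength values and the exponential learning curve, are illustrative assumptions, not the study's data): early trials are depressed by an unfamiliar movement pattern, so the maximum over only the first 3 trials systematically underestimates the maximum over a longer series.

```python
import numpy as np

# Illustrative simulation of "best of n trials" under an acute learning effect.
rng = np.random.default_rng(3)
n_subjects, n_trials = 1000, 10

true_max = rng.normal(2000, 200, n_subjects)               # hypothetical MISS (N)
learning = 1 - 0.10 * np.exp(-np.arange(n_trials) / 2.0)   # rises from 0.90 toward 1
trials = true_max[:, None] * learning + rng.normal(0, 30, (n_subjects, n_trials))

best3 = trials[:, :3].max(axis=1)    # B3T: best of first 3 trials
best10 = trials.max(axis=1)          # best of all 10 trials
print(f"best-of-3 mean:  {best3.mean():.0f} N")
print(f"best-of-10 mean: {best10.mean():.0f} N")
```

Because the first trials are still on the rising part of the learning curve, B3T sits below the best-of-10 estimate for essentially every simulated participant, mirroring the direction of the systematic error reported above.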

  9. The required number of treatment imaging days for an effective off-line correction of systematic errors in conformal radiotherapy of prostate cancer -- a radiobiological analysis

    International Nuclear Information System (INIS)

    Amer, Ali M.; Mackay, Ranald I.; Roberts, Stephen A.; Hendry, Jolyon H.; Williams, Peter C.

    2001-01-01

    Background and purpose: To use radiobiological modelling to estimate the number of initial days of treatment imaging required to gain most of the benefit from off-line correction of systematic errors in the conformal radiation therapy of prostate cancer. Materials and methods: Treatment plans based on the anatomical information of a representative patient were generated assuming that the patient is treated with a multi leaf collimator (MLC) four-field technique and a total isocentre dose of 72 Gy delivered in 36 daily fractions. Target position variations between fractions were simulated from standard deviations of measured data found in the literature. Off-line correction of systematic errors was assumed to be performed only once based on the measured errors during the initial days of treatment. The tumour control probability (TCP) was calculated using the Webb and Nahum model. Results: Simulation of daily variations in the target position predicted a marked reduction in TCP if the planning target volume (PTV) margin was smaller than 4 mm (TCP decreased by 3.4% for 2 mm margin). The systematic components of target position variations had greater effect on the TCP than the random components. Off-line correction of estimated systematic errors reduced the decrease in TCP due to target daily displacements, nevertheless, the resulting TCP levels for small margins were still less than the TCP level obtained with the use of an adequate PTV margin of ∼10 mm. The magnitude of gain in TCP expected from the correction depended on the number of treatment imaging days used for the correction and the PTV margin applied. Gains of 2.5% in TCP were estimated from correction of systematic errors performed after 6 initial days of treatment imaging for a 2 mm PTV margin. The effect of various possible magnitudes of systematic and random components on the gain in TCP expected from correction and on the number of imaging days required was also investigated. Conclusions: Daily
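The TCP calculation underlying this analysis can be sketched as a Poisson model averaged over a population spread in radiosensitivity, in the spirit of the Webb and Nahum model cited above; the clonogen number, alpha distribution, and dose below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Poisson TCP averaged over a Gaussian spread in radiosensitivity alpha
# (Webb-Nahum-style population averaging; all parameters illustrative).
rng = np.random.default_rng(4)
n0 = 1e7                                                # initial clonogen number
alpha = rng.normal(0.3, 0.06, 10_000).clip(min=1e-3)    # Gy^-1, population spread
dose = 72.0                                             # Gy, total dose

# Per-patient TCP = exp(-expected surviving clonogens); average over population.
tcp = np.mean(np.exp(-n0 * np.exp(-alpha * dose)))
print(f"population-averaged TCP at {dose:.0f} Gy: {tcp:.3f}")
```

In such a model the TCP falls steeply once geometric misses effectively reduce the dose to part of the target, which is why even millimetre-level systematic setup errors translate into the percent-level TCP losses discussed in the abstract.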

  10. The Application of Coherent Local Time for Optical Time Transfer and the Quantification of Systematic Errors in Satellite Laser Ranging

    Science.gov (United States)

    Schreiber, K. Ulrich; Kodet, Jan

    2018-02-01

    Highly precise time and stable reference frequencies are fundamental requirements for space geodesy. Satellite laser ranging (SLR) is one of these techniques, which differs from all other applications like Very Long Baseline Interferometry (VLBI), Global Navigation Satellite Systems (GNSS) and finally Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) by the fact that it is an optical two-way measurement technique. That means that there is no need for a clock synchronization process between both ends of the distance covered by the measurement technique. Under the assumption of isotropy for the speed of light, SLR establishes the only practical realization of the Einstein Synchronization process so far. Therefore it is a powerful time transfer technique. However, in order to transfer time between two remote clocks, it is also necessary to tightly control all possible signal delays in the ranging process. This paper discusses the role of time and frequency in SLR as well as the error sources before it address the transfer of time between ground and space. The need of an improved signal delay control led to a major redesign of the local time and frequency distribution at the Geodetic Observatory Wettzell. Closure measurements can now be used to identify and remove systematic errors in SLR measurements.

  11. Constituent quarks and systematic errors in mid-rapidity charged multiplicity dNch/dη distributions

    Science.gov (United States)

    Tannenbaum, M. J.

    2018-01-01

    Centrality definition in A + A collisions at colliders such as RHIC and LHC suffers from a correlated systematic uncertainty caused by the efficiency of detecting a p + p collision (50 ± 5% for PHENIX at RHIC). In A + A collisions where centrality is measured by the number of nucleon collisions, Ncoll, or the number of nucleon participants, Npart, or the number of constituent quark participants, Nqp, the error in the efficiency of the primary interaction trigger (Beam-Beam Counters) for a p + p collision leads to a correlated systematic uncertainty in Npart, Ncoll or Nqp which reduces binomially as the A + A collisions become more central. If this is not correctly accounted for when comparing A + A to p + p collisions, mistaken conclusions can result. A recent example is presented concerning whether the mid-rapidity charged multiplicity per constituent quark participant, (dNch/dη)/Nqp, in Au + Au collisions at RHIC is the same as the value in p + p collisions.

  12. Measurement properties of visual rating of postural orientation errors of the lower extremity - A systematic review and meta-analysis.

    Science.gov (United States)

    Nae, Jenny; Creaby, Mark W; Cronström, Anna; Ageberg, Eva

    2017-09-01

    To systematically review measurement properties of visual assessment and rating of Postural Orientation Errors (POEs) in participants with or without lower extremity musculoskeletal disorders. A systematic review according to the PRISMA guidelines was conducted. The search was performed in Medline (Pubmed), CINAHL and EMBASE (OVID) databases until August 2016. Studies reporting measurement properties for visual rating of postural orientation during the performance of weight-bearing functional tasks were included. No limits were placed on participant age, sex or whether they had a musculoskeletal disorder affecting the lower extremity. Twenty-eight articles were included, 5 of which included populations with a musculoskeletal disorder. Visual rating of the knee-medial-to-foot position (KMFP) was reliable within and between raters, and meta-analyses showed that this POE was valid against 2D and 3D kinematics in asymptomatic populations. Other segment-specific POEs showed either poor to moderate reliability or there were too few studies to permit synthesis. Intra-rater reliability was at least moderate for POEs within a task whereas inter-rater reliability was at most moderate. Visual rating of KMFP appears to be valid and reliable in asymptomatic adult populations. Measurement properties remain to be determined for POEs other than KMFP. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Resilience to emotional distress in response to failure, error or mistakes: A systematic review.

    Science.gov (United States)

    Johnson, Judith; Panagioti, Maria; Bass, Jennifer; Ramsey, Lauren; Harrison, Reema

    2017-03-01

    Perceptions of failure have been implicated in a range of psychological disorders, and even a single experience of failure can heighten anxiety and depression. However, not all individuals experience significant emotional distress following failure, indicating the presence of resilience. The current systematic review synthesised studies investigating resilience factors to emotional distress resulting from the experience of failure. For the definition of resilience we used the Bi-Dimensional Framework for resilience research (BDF) which suggests that resilience factors are those which buffer the impact of risk factors, and outlines criteria a variable should meet in order to be considered as conferring resilience. Studies were identified through electronic searches of PsycINFO, MEDLINE, EMBASE and Web of Knowledge. Forty-six relevant studies reported in 38 papers met the inclusion criteria. These provided evidence of the presence of factors which confer resilience to emotional distress in response to failure. The strongest support was found for the factors of higher self-esteem, more positive attributional style, and lower socially-prescribed perfectionism. Weaker evidence was found for the factors of lower trait reappraisal, lower self-oriented perfectionism and higher emotional intelligence. The majority of studies used experimental or longitudinal designs. These results identify specific factors which should be targeted by resilience-building interventions. Keywords: resilience; failure; stress; self-esteem; attributional style; perfectionism. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. What Makes Hydrologic Models Differ? Using SUMMA to Systematically Explore Model Uncertainty and Error

    Science.gov (United States)

    Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.

    2017-12-01

    Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of

  15. Systematic errors in respiratory gating due to intrafraction deformations of the liver

    International Nuclear Information System (INIS)

    Siebenthal, Martin von; Szekely, Gabor; Lomax, Antony J.; Cattin, Philippe C.

    2007-01-01

    This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift

  16. Impact of interventions designed to reduce medication administration errors in hospitals: a systematic review.

    Science.gov (United States)

    Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Walsh, Tanya; Ashcroft, Darren M

    2014-05-01

There is a need to identify effective interventions to minimize the threat posed by medication administration errors (MAEs). Our objective was to review and critically appraise interventions designed to reduce MAEs in the hospital setting. Ten electronic databases were searched between 1985 and November 2013. Randomized controlled trials (RCTs) and controlled trials (CTs) reporting rates of MAEs or related adverse drug events between an intervention group and a comparator group were included. Data from each study were independently extracted and assessed for potential risk of bias by two authors. Risk ratios (RRs, with 95 % confidence intervals [CIs]) were used to examine the effect of an intervention. Six RCTs and seven CTs were included. Types of interventions clustered around four main themes: medication use technology (n = 4); nurse education and training (n = 3); changing practice in anesthesia (n = 2); and ward system changes (n = 4). Reductions in MAE rates were reported by five studies; these included automated drug dispensing (RR 0.72, 95 % CI 0.53-1.00), computerized physician order entry (RR 0.51, 95 % CI 0.40-0.66), barcode-assisted medication administration with electronic administration records (RR 0.71, 95 % CI 0.53-0.95), nursing education/training using simulation (RR 0.17, 95 % CI 0.08-0.38), and clinical pharmacist-led training (RR 0.76, 95 % CI 0.67-0.87). Increased or equivocal outcome rates were found for the remaining studies. Weaknesses in internal or external validity were apparent for most included studies. Theses and conference proceedings were excluded, and data produced outside commercial publishing were not searched. There is emerging evidence of the impact of specific interventions to reduce MAEs in hospitals, which warrants further investigation using rigorous and standardized study designs. Theory-driven efforts to understand the underlying causes of MAEs may lead to more effective interventions in the future.
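The effect sizes above are risk ratios with 95 % confidence intervals. A minimal sketch of how an RR and its CI are derived from 2×2 event counts, using the standard log-normal approximation (the counts here are made up for illustration, not data from the review):

```python
import math

def risk_ratio(events_tx, n_tx, events_ctl, n_ctl):
    """Risk ratio and 95% CI from 2x2 counts (log-normal approximation)."""
    p_tx = events_tx / n_tx
    p_ctl = events_ctl / n_ctl
    rr = p_tx / p_ctl
    # Standard error of log(RR)
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical trial: 30/1000 errors with the intervention vs. 60/1200 without
rr, lo, hi = risk_ratio(30, 1000, 60, 1200)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An RR whose CI excludes 1.0, as in this example, indicates a statistically significant reduction at the 5 % level.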

  17. Systematic errors in measuring parameters of non-spinning compact binary coalescences with post-Newtonian templates

    International Nuclear Information System (INIS)

    Bose, Sukanta; Ghosh, Shaon; Ajith, P

    2010-01-01

We study the astrophysical impact of inaccurate and incomplete modeling of the gravitational waveforms from compact binary coalescences (CBCs). We do so by the matched filtering of phenomenological inspiral-merger-ringdown (IMR) signals with a bank of inspiral-phase templates modeled on the 3.5 post-Newtonian TaylorT1 approximant. The rationale for the choice of the templates is threefold. (1) The inspiral phase of the phenomenological IMR signals, which are an example of complete IMR signals, is modeled on the same TaylorT1 approximant. (2) In the low-mass limit, where the merger and ringdown phases are much shorter than the inspiral phase, the errors should become vanishingly small and thus provide an important check on the numerical aspects of our simulations. (3) Since the binary black hole signals are not yet known for mass ratios above ten, and since signals from CBCs involving neutron stars are affected by uncertainties in the knowledge of their equation of state, inspiral templates are still in use in searches for those signals. The results from our numerical simulations are compared with analytical calculations of the systematic errors using the Fisher matrix on the template parameter space. We find that the loss in signal-to-noise ratio (SNR) can be as large as 45% even for binary black holes with component masses m₁ = 10 M⊙ and m₂ = 40 M⊙. The estimated total mass for the same pair can also be off by as much as 20%. Both of these are worse for some higher-mass combinations. Even the estimation of the symmetric mass ratio η suffers a nearly 20% error for this example and can be worse than 50% for the mass ranges studied here. These errors significantly dominate their statistical counterparts (at a nominal SNR of 10). It may, however, be possible to mitigate the loss in SNR by allowing for templates with unphysical values of η.
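The SNR loss quoted above comes from the mismatch between signal and template under matched filtering. A toy illustration of that quantity, assuming white noise (flat PSD) and a hypothetical chirp-rate mismatch rather than the TaylorT1 waveforms of the paper: the match is the normalized overlap maximized over time shifts, and 1 − match is the fractional SNR loss.

```python
import numpy as np

def match(a, b):
    """Normalized overlap of two real signals, maximized over circular time
    shifts via FFT cross-correlation (white-noise inner product assumed)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    corr = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=len(a))
    return np.abs(corr).max()

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
envelope = np.exp(-((t - 0.5) / 0.2) ** 2)
# "Signal" and a template with a slightly wrong chirp rate (40 vs 43)
signal = np.sin(2 * np.pi * (50 * t + 40 * t**2)) * envelope
template = np.sin(2 * np.pi * (50 * t + 43 * t**2)) * envelope

m = match(signal, template)
print(f"match = {m:.3f}; fractional SNR loss = {100 * (1 - m):.1f}%")
```

For identical waveforms the match is exactly 1; any modeling error pushes it below 1 and discards the corresponding fraction of the recoverable SNR.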

  18. The I-PASS mnemonic and the occurrence of handoff related errors in adult acute care hospitals: a systematic review protocol.

    Science.gov (United States)

    Ransom, Brittany; Winters, Karen

    2018-01-01

What is the effectiveness of the I-PASS mnemonic in reducing handoff-related errors during inter- or intrahospital transfers for hospitalized patients? The objective of this systematic review is to identify the impact of the I-PASS mnemonic during hospitalized patient inter- or intrahospital transfers on medication errors, transfer delays, treatment delays and mortality. More specifically, the objective is to identify the effect that the I-PASS mnemonic has on handoff-related errors during inter- or intrahospital patient transfers by comparing rates pre- and post-I-PASS implementation.

  19. Dosimetric effect of intrafraction motion and residual setup error for hypofractionated prostate intensity-modulated radiotherapy with online cone beam computed tomography image guidance.

    LENUS (Irish Health Repository)

    Adamson, Justus

    2012-02-01

PURPOSE: To quantify the dosimetric effect and margins required to account for prostate intrafractional translation and residual setup error in a cone beam computed tomography (CBCT)-guided hypofractionated radiotherapy protocol. METHODS AND MATERIALS: Prostate position after online correction was measured during dose delivery using simultaneous kV fluoroscopy and posttreatment CBCT in 572 fractions to 30 patients. We reconstructed the dose distribution to the clinical target volume (CTV) using a convolution of the static dose with a probability density function (PDF) based on the kV fluoroscopy, and we calculated the minimum dose received by 99% of the CTV (D(99)). We compared reconstructed doses when the convolution was performed per beam, per patient, and when the PDF was created using posttreatment CBCT. We determined the minimum axis-specific margins to limit CTV D(99) reduction to 1%. RESULTS: For 3-mm margins, D(99) reduction was ≤5% for 29/30 patients. Using post-CBCT rather than localizations at treatment delivery exaggerated dosimetric effects by ~47%, while there was no such bias between the dose convolved with a beam-specific and a patient-specific PDF. After eight fractions, final cumulative D(99) could be predicted with a root mean square error of <1%. For 90% of patients, the required margins were ≤2, 4, and 3 mm, with 70%, 40%, and 33% of patients requiring no right-left (RL), anteroposterior (AP), and superoinferior (SI) margins, respectively. CONCLUSIONS: For protocols with CBCT guidance, RL, AP, and SI margins of 2, 4, and 3 mm are sufficient to account for translational errors; however, the large variation in patient-specific margins suggests that adaptive management may be beneficial.
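The dose-reconstruction step described above, convolving the static dose with a motion PDF and reading off D(99), can be sketched in one dimension. This is a toy profile with an assumed Gaussian motion PDF, not the study's fluoroscopy-derived data:

```python
import numpy as np

# Toy 1-D static dose profile: 100% dose across a 20 mm field, 0 outside
x = np.linspace(-30, 30, 601)                        # position (mm), 0.1 mm grid
static_dose = np.where(np.abs(x) <= 10, 100.0, 0.0)

# Assumed Gaussian motion PDF (residual intrafraction translation, sigma = 2 mm)
sigma = 2.0
pdf = np.exp(-0.5 * (x / sigma) ** 2)
pdf /= pdf.sum()

# Motion-averaged dose = convolution of the static dose with the PDF
blurred = np.convolve(static_dose, pdf, mode="same")

# D99 of a CTV occupying |x| <= 8 mm: the dose exceeded by 99% of CTV voxels,
# i.e. the 1st percentile of the blurred dose inside the CTV
ctv = blurred[np.abs(x) <= 8]
d99 = np.percentile(ctv, 1)
print(f"CTV D99 after motion blur: {d99:.1f}%")
```

The blur erodes dose at the field edges, so D99 drops below 100% once the margin between CTV and field edge is comparable to the motion sigma; widening the margin restores coverage, which is exactly the trade-off the margin analysis quantifies.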

  20. Bioelectrical impedance analysis to estimate body composition in children and adolescents: a systematic review and evidence appraisal of validity, responsiveness, reliability and measurement error

    NARCIS (Netherlands)

    Talma, H.; Chinapaw, M.J.M.; Bakker, B.; Hirasing, R.A.; Terwee, C.B.; Altenburg, T.M.

    2013-01-01

Bioelectrical impedance analysis (BIA) is a practical method to estimate percentage body fat (%BF). In this systematic review, we aimed to assess validity, responsiveness, reliability and measurement error of BIA methods in estimating %BF in children and adolescents. We searched for relevant studies…

  1. Random and systematic errors in case–control studies calculating the injury risk of driving under the influence of psychoactive substances

    DEFF Research Database (Denmark)

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.

    2013-01-01

    injury in car crashes. The calculated odds ratios in these studies showed large variations, despite the use of uniform guidelines for the study designs. The main objective of the present article is to provide insight into the presence of random and systematic errors in the six DRUID case-control studies...

  2. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States); Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada); Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States); Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States); Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2013-11-15

Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating the accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were…
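The gamma passing rates discussed here rest on the gamma index. A minimal 1-D sketch of how a passing rate is computed under global 3%/3 mm criteria (toy dose profiles, brute-force search; clinical tools use 2-D/3-D grids and interpolation):

```python
import numpy as np

def gamma_pass_rate_1d(ref, eval_, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """1-D global gamma analysis (3%/3 mm by default).

    For each reference point, gamma is the minimum over evaluated points of
    sqrt((dose diff / dose tolerance)^2 + (distance / DTA)^2); the point
    passes if gamma <= 1. The dose tolerance is global: a fraction of the
    reference maximum.
    """
    tol = dose_tol * ref.max()
    x = np.arange(len(ref)) * spacing_mm
    gammas = []
    for i, d_ref in enumerate(ref):
        dd = (eval_ - d_ref) / tol          # normalized dose differences
        dx = (x - x[i]) / dta_mm            # normalized distances
        gammas.append(np.sqrt(dd**2 + dx**2).min())
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

ref = np.array([0, 10, 50, 100, 100, 100, 50, 10, 0], dtype=float)
shifted = np.roll(ref, 1)                   # a 1-pixel (1 mm) spatial shift
rate = gamma_pass_rate_1d(ref, shifted, spacing_mm=1.0)
print(f"pass rate: {rate:.1f}%")
```

Note how tolerant the metric is: a systematic 1 mm shift of the whole profile still passes nearly every point, which is the insensitivity the case studies above exploit.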

  3. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    Science.gov (United States)

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

The growing ubiquity of metabolomic techniques has facilitated high-frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve-fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing, carried out on all observed metabolites at once, to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data were successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at timepoints where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small…
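The detection step above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: simulated trends, a low-order polynomial standing in for the nonparametric smoother, and an arbitrary 3% threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 21)

# Eight smooth metabolite time-courses (arbitrary shapes) with 1% random noise
trends = np.array([5 + 0.5 * k + np.sin(0.4 * t + k) for k in range(8)])
data = trends * (1 + rng.normal(0, 0.01, trends.shape))

# Inject a sample-wide "dilution" error: every metabolite reads 10% low at t[12]
data[:, 12] *= 0.90

# Smooth each trend and compute the percent deviation at every timepoint
fits = np.array([np.polyval(np.polyfit(t, y, 4), t) for y in data])
pct_dev = 100 * (data - fits) / fits

# Random noise averages out across metabolites, but a systematic error shows
# up as a large *median* deviation at a single timepoint
median_dev = np.median(pct_dev, axis=0)
flagged = np.flatnonzero(np.abs(median_dev) > 3.0)   # 3% threshold (arbitrary)
print("flagged timepoints:", flagged)
```

The key idea carried over from the paper is pooling across metabolites: a dilution error moves every trend at the same timepoint, so the median deviation isolates it from metabolite-specific noise.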

  4. Adaptation to random and systematic errors: Comparison of amputee and non-amputee control interfaces with varying levels of process noise

    Science.gov (United States)

    Kording, Konrad P.; Hargrove, Levi J.; Sensinger, Jonathon W.

    2017-01-01

    The objective of this study was to understand how people adapt to errors when using a myoelectric control interface. We compared adaptation across 1) non-amputee subjects using joint angle, joint torque, and myoelectric control interfaces, and 2) amputee subjects using myoelectric control interfaces with residual and intact limbs (five total control interface conditions). We measured trial-by-trial adaptation to self-generated errors and random perturbations during a virtual, single degree-of-freedom task with two levels of feedback uncertainty, and evaluated adaptation by fitting a hierarchical Kalman filter model. We have two main results. First, adaptation to random perturbations was similar across all control interfaces, whereas adaptation to self-generated errors differed. These patterns matched predictions of our model, which was fit to each control interface by changing the process noise parameter that represented system variability. Second, in amputee subjects, we found similar adaptation rates and error levels between residual and intact limbs. These results link prosthesis control to broader areas of motor learning and adaptation and provide a useful model of adaptation with myoelectric control. The model of adaptation will help us understand and solve prosthesis control challenges, such as providing additional sensory feedback. PMID:28301512
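The hierarchical Kalman filter model described above predicts that the trial-by-trial adaptation rate grows with process noise and shrinks with feedback uncertainty. A minimal scalar sketch of that relationship (illustrative parameters, not the paper's fitted model):

```python
def steady_state_gain(q, r, n_trials=500):
    """Steady-state Kalman gain for a scalar random-walk state.

    q: process noise variance (variability of the control interface)
    r: observation noise variance (feedback uncertainty)
    The gain is the fraction of each observed error corrected on the next
    trial, i.e. the trial-by-trial adaptation rate.
    """
    p = 1.0                      # initial state-estimate variance
    k = 0.0
    for _ in range(n_trials):
        p = p + q                # predict: uncertainty grows by process noise
        k = p / (p + r)          # update: Kalman gain
        p = (1.0 - k) * p
    return k

# A noisier interface (larger q) predicts faster adaptation; noisier
# feedback (larger r) predicts slower adaptation.
k_quiet = steady_state_gain(q=0.01, r=1.0)
k_noisy = steady_state_gain(q=0.10, r=1.0)
k_uncertain_fb = steady_state_gain(q=0.10, r=4.0)
print(k_quiet, k_noisy, k_uncertain_fb)
```

This is the qualitative pattern the study exploits: fitting only the process noise parameter per control interface is enough to reproduce the different adaptation rates.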

  5. Adaptation to random and systematic errors: Comparison of amputee and non-amputee control interfaces with varying levels of process noise.

    Directory of Open Access Journals (Sweden)

    Reva E Johnson

The objective of this study was to understand how people adapt to errors when using a myoelectric control interface. We compared adaptation across 1) non-amputee subjects using joint angle, joint torque, and myoelectric control interfaces, and 2) amputee subjects using myoelectric control interfaces with residual and intact limbs (five total control interface conditions). We measured trial-by-trial adaptation to self-generated errors and random perturbations during a virtual, single degree-of-freedom task with two levels of feedback uncertainty, and evaluated adaptation by fitting a hierarchical Kalman filter model. We have two main results. First, adaptation to random perturbations was similar across all control interfaces, whereas adaptation to self-generated errors differed. These patterns matched predictions of our model, which was fit to each control interface by changing the process noise parameter that represented system variability. Second, in amputee subjects, we found similar adaptation rates and error levels between residual and intact limbs. These results link prosthesis control to broader areas of motor learning and adaptation and provide a useful model of adaptation with myoelectric control. The model of adaptation will help us understand and solve prosthesis control challenges, such as providing additional sensory feedback.

  6. Adaptation to random and systematic errors: Comparison of amputee and non-amputee control interfaces with varying levels of process noise.

    Science.gov (United States)

    Johnson, Reva E; Kording, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W

    2017-01-01

    The objective of this study was to understand how people adapt to errors when using a myoelectric control interface. We compared adaptation across 1) non-amputee subjects using joint angle, joint torque, and myoelectric control interfaces, and 2) amputee subjects using myoelectric control interfaces with residual and intact limbs (five total control interface conditions). We measured trial-by-trial adaptation to self-generated errors and random perturbations during a virtual, single degree-of-freedom task with two levels of feedback uncertainty, and evaluated adaptation by fitting a hierarchical Kalman filter model. We have two main results. First, adaptation to random perturbations was similar across all control interfaces, whereas adaptation to self-generated errors differed. These patterns matched predictions of our model, which was fit to each control interface by changing the process noise parameter that represented system variability. Second, in amputee subjects, we found similar adaptation rates and error levels between residual and intact limbs. These results link prosthesis control to broader areas of motor learning and adaptation and provide a useful model of adaptation with myoelectric control. The model of adaptation will help us understand and solve prosthesis control challenges, such as providing additional sensory feedback.

  7. Statistical and systematic errors in the measurement of weak-lensing Minkowski functionals: Application to the Canada-France-Hawaii Lensing Survey

    Energy Technology Data Exchange (ETDEWEB)

    Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp [Department of Physics, University of Tokyo, Tokyo 113-0033 (Japan)

    2014-05-01

The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ∼1400 deg² will constrain the dark energy equation-of-state parameter with an error of Δw₀ ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ωm0 = 0.256 +0.054/−0.046.
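A Fisher analysis like the one described forecasts parameter errors from the sensitivity of an observable to each parameter. A self-contained toy version with a hypothetical two-parameter observable (not the lensing MFs), where the marginalized 1σ errors are the square roots of the inverse-Fisher diagonal:

```python
import numpy as np

def model(theta, x):
    """Stand-in observable with two parameters (hypothetical)."""
    a, b = theta
    return a * x + b * x**2

x = np.linspace(0.1, 1.0, 10)          # measurement bins
sigma = 0.05 * np.ones_like(x)         # per-bin Gaussian errors
fid = np.array([1.0, 0.5])             # fiducial parameter values

# Numerical derivatives of the observable around the fiducial point
eps = 1e-6
derivs = []
for i in range(2):
    dtheta = np.zeros(2)
    dtheta[i] = eps
    derivs.append((model(fid + dtheta, x) - model(fid - dtheta, x)) / (2 * eps))

# Fisher matrix F_ij = sum_k d_i(k) d_j(k) / sigma_k^2
F = np.array([[np.sum(derivs[i] * derivs[j] / sigma**2) for j in range(2)]
              for i in range(2)])
marg_errors = np.sqrt(np.diag(np.linalg.inv(F)))
print("marginalized 1-sigma errors:", marg_errors)
```

Marginalized errors are always at least as large as the conditional errors 1/√F_ii; the gap between them measures the parameter degeneracies that combining statistics (as in the MFs-plus-correlation-function analysis above) is meant to break.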

  8. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    Science.gov (United States)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
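The additive property noted above, that deviations induced by individual biases sum when several inputs are biased at once, is exact for a linearized retrieval. A toy demonstration with a hypothetical sensitivity matrix (not the actual inversion code):

```python
import numpy as np

# Hypothetical linearized retrieval: 3 optical inputs -> 2 microphysical
# parameters, represented by a fixed sensitivity matrix
A = np.array([[1.0, 0.3, 0.1],
              [0.2, 1.0, 0.4]])

def retrieve(optical):
    return A @ optical

base = np.array([1.0, 2.0, 0.5])                          # unbiased input
d1 = retrieve(base + [0.1, 0.0, 0.0]) - retrieve(base)    # bias in input 1
d2 = retrieve(base + [0.0, 0.2, 0.0]) - retrieve(base)    # bias in input 2
d_both = retrieve(base + [0.1, 0.2, 0.0]) - retrieve(base)

# For a linear(ized) retrieval, the induced deviations add exactly
print(np.allclose(d_both, d1 + d2))
```

In the actual nonlinear inversion the additivity is only approximate, which is why the paper verifies it empirically over the perturbation ranges of interest.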

  9. Impact of a quasi-stochastic cellular automaton backscatter scheme on the systematic error and seasonal prediction skill of a global climate model.

    Science.gov (United States)

    Berner, J; Doblas-Reyes, F J; Palmer, T N; Shutts, G; Weisheimer, A

    2008-07-28

    The impact of a nonlinear dynamic cellular automaton (CA) model, as a representation of the partially stochastic aspects of unresolved scales in global climate models, is studied in the European Centre for Medium Range Weather Forecasts coupled ocean-atmosphere model. Two separate aspects are discussed: impact on the systematic error of the model, and impact on the skill of seasonal forecasts. Significant reductions of systematic error are found both in the tropics and in the extratropics. Such reductions can be understood in terms of the inherently nonlinear nature of climate, in particular how energy injected by the CA at the near-grid scale can backscatter nonlinearly to larger scales. In addition, significant improvements in the probabilistic skill of seasonal forecasts are found in terms of a number of different variables such as temperature, precipitation and sea-level pressure. Such increases in skill can be understood both in terms of the reduction of systematic error as mentioned above, and in terms of the impact on ensemble spread of the CA's representation of inherent model uncertainty.

  10. Impact of inter- and intrafraction deviations and residual set-up errors on PTV margins. Different alignment techniques in 3D conformal prostate cancer radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Langsenlehner, T.; Doeller, C.; Winkler, P.; Kapp, K.S. [Graz Medical Univ. (Austria). Dept. of Therapeutic Radiology and Oncology; Galle, G. [Graz Medical Univ. (Austria). Dept. of Urology

    2013-04-15

The aim of this work was to analyze interfraction and intrafraction deviations and residual set-up errors (RSE) after online repositioning to determine PTV margins for 3 different alignment techniques in prostate cancer radiotherapy. The present prospective study included 44 prostate cancer patients with implanted fiducials treated with three-dimensional (3D) conformal radiotherapy. Daily localization was based on skin marks followed by marker detection using kilovoltage (kV) imaging and subsequent patient repositioning. Additionally, in-treatment megavoltage (MV) images were obtained for each treatment field. In an off-line analysis of 7,273 images, interfraction prostate motion, RSE after marker-based prostate localization, prostate position during each treatment session, and the effect of treatment time on intrafraction deviations were analyzed to evaluate PTV margins. Margins accounting for interfraction deviation, RSE and intrafraction motion were 14.1, 12.9, and 15.1 mm in anterior-posterior (AP), superior-inferior (SI), and left-right (LR) direction for skin mark alignment and 9.6, 8.7, and 2.6 mm for bony structure alignment, respectively. Alignment to implanted markers required margins of 4.6, 2.8, and 2.5 mm. As margins to account for intrafraction motion increased with treatment prolongation, PTV margins could be reduced to 3.9, 2.6, and 2.4 mm if treatment time was ≤ 4 min. With daily online correction and repositioning based on implanted fiducials, a significant reduction of PTV margins can be achieved. The use of an optimized workflow with faster treatment techniques such as volumetric modulated arc techniques (VMAT) could allow for a further decrease. (orig.)
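The abstract does not state the margin recipe used to turn the measured error distributions into PTV margins. A widely used one (the van Herk recipe) combines the standard deviations of systematic (Σ) and random (σ) errors as M = 2.5Σ + 0.7σ; a sketch under that assumption, with hypothetical per-axis SDs rather than the study's data:

```python
def ptv_margin(sigma_systematic_mm, sigma_random_mm):
    """Widely used PTV margin recipe (van Herk): M = 2.5*Sigma + 0.7*sigma.
    Sigma: SD of systematic (preparation) errors; sigma: SD of random
    (execution) errors. Not necessarily the recipe used in the study above."""
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

# Hypothetical per-axis error SDs (mm) after marker-based online correction
for axis, s_sys, s_rand in [("AP", 1.2, 1.5), ("SI", 0.8, 1.0), ("LR", 0.7, 0.9)]:
    print(f"{axis}: {ptv_margin(s_sys, s_rand):.1f} mm")
```

The recipe makes the paper's central point quantitative: systematic errors are penalized roughly 3.5 times more heavily than random ones, so online correction of systematic set-up error shrinks margins far more than reducing random motion would.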

  11. Impact of inter- and intrafraction deviations and residual set-up errors on PTV margins. Different alignment techniques in 3D conformal prostate cancer radiotherapy.

    Science.gov (United States)

    Langsenlehner, T; Döller, C; Winkler, P; Gallé, G; Kapp, K S

    2013-04-01

    The aim of this work was to analyze interfraction and intrafraction deviations and residual set-up errors (RSE) after online repositioning to determine PTV margins for 3 different alignment techniques in prostate cancer radiotherapy. The present prospective study included 44 prostate cancer patients with implanted fiducials treated with three-dimensional (3D) conformal radiotherapy. Daily localization was based on skin marks followed by marker detection using kilovoltage (kV) imaging and subsequent patient repositioning. Additionally, in-treatment megavoltage (MV) images were obtained for each treatment field. In an off-line analysis of 7,273 images, interfraction prostate motion, RSE after marker-based prostate localization, prostate position during each treatment session, and the effect of treatment time on intrafraction deviations were analyzed to evaluate PTV margins. Margins accounting for interfraction deviation, RSE and intrafraction motion were 14.1, 12.9, and 15.1 mm in anterior-posterior (AP), superior-inferior (SI), and left-right (LR) direction for skin mark alignment and 9.6, 8.7, and 2.6 mm for bony structure alignment, respectively. Alignment to implanted markers required margins of 4.6, 2.8, and 2.5 mm. As margins to account for intrafraction motion increased with treatment prolongation PTV margins could be reduced to 3.9, 2.6, and 2.4 mm if treatment time was ≤ 4 min. With daily online correction and repositioning based on implanted fiducials, a significant reduction of PTV margins can be achieved. The use of an optimized workflow with faster treatment techniques such as volumetric modulated arc techniques (VMAT) could allow for a further decrease.

  12. Impact of inter- and intrafraction deviations and residual set-up errors on PTV margins. Different alignment techniques in 3D conformal prostate cancer radiotherapy

    International Nuclear Information System (INIS)

    Langsenlehner, T.; Doeller, C.; Winkler, P.; Kapp, K.S.; Galle, G.

    2013-01-01

    The aim of this work was to analyze interfraction and intrafraction deviations and residual set-up errors (RSE) after online repositioning to determine PTV margins for 3 different alignment techniques in prostate cancer radiotherapy. The present prospective study included 44 prostate cancer patients with implanted fiducials treated with three-dimensional (3D) conformal radiotherapy. Daily localization was based on skin marks followed by marker detection using kilovoltage (kV) imaging and subsequent patient repositioning. Additionally, in-treatment megavoltage (MV) images were obtained for each treatment field. In an off-line analysis of 7,273 images, interfraction prostate motion, RSE after marker-based prostate localization, prostate position during each treatment session, and the effect of treatment time on intrafraction deviations were analyzed to evaluate PTV margins. Margins accounting for interfraction deviation, RSE and intrafraction motion were 14.1, 12.9, and 15.1 mm in anterior-posterior (AP), superior-inferior (SI), and left-right (LR) direction for skin mark alignment and 9.6, 8.7, and 2.6 mm for bony structure alignment, respectively. Alignment to implanted markers required margins of 4.6, 2.8, and 2.5 mm. As margins to account for intrafraction motion increased with treatment prolongation PTV margins could be reduced to 3.9, 2.6, and 2.4 mm if treatment time was ≤ 4 min. With daily online correction and repositioning based on implanted fiducials, a significant reduction of PTV margins can be achieved. The use of an optimized workflow with faster treatment techniques such as volumetric modulated arc techniques (VMAT) could allow for a further decrease. (orig.)

  13. Period prevalence and reporting rate of medication errors among nurses in Iran: A systematic review and meta-analysis.

    Science.gov (United States)

    Matin, Behzad Karami; Hajizadeh, Mohammad; Nouri, Bijan; Rezaeian, Shahab; Mohammadi, Masoud; Rezaei, Satar

    2018-01-22

To estimate the 1-year period prevalence of medication errors and the reporting rate to nurse managers among nurses working in hospitals in Iran. Medication errors are one of the main factors affecting the quality of hospital services and compromising patient safety in health care systems. A literature search of Iranian and international scientific databases was developed to find relevant studies. Meta-regression was used to identify which characteristics may have a confounding effect on the pooled prevalence estimates. Based on the final 22 studies with 3556 samples, the overall estimated 1-year period prevalence of medication errors and its reporting rate to nurse managers among nurses were 53% (95% confidence interval, 41%-60%) and 36% (95% confidence interval, 23%-50%), respectively. The meta-regression analyses indicated that the sex (female/male) ratio was a statistically significant predictor of the prevalence of medication errors (p […] medication errors to nurse managers. The period prevalence of medication errors among nurses working in hospitals was high in Iran, whereas its reporting rate to nurse managers was low. Continuous training programmes are required to reduce and prevent medication errors among nursing staff and to improve the reporting rate to nurse managers in Iran. © 2018 John Wiley & Sons Ltd.
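Pooled prevalence estimates with confidence intervals like those above typically come from a random-effects meta-analysis. A compact sketch of the DerSimonian-Laird approach with made-up study counts (real meta-analyses usually transform proportions, e.g. logit or double-arcsine, before pooling):

```python
import math

def pooled_prevalence_dl(events, ns):
    """DerSimonian-Laird random-effects pooled proportion (toy sketch).

    Uses raw proportions with binomial variances per study; returns the
    pooled estimate and its 95% CI.
    """
    p = [e / n for e, n in zip(events, ns)]
    v = [pi * (1 - pi) / n for pi, n in zip(p, ns)]
    w = [1 / vi for vi in v]
    # Fixed-effect estimate and Cochran's Q heterogeneity statistic
    p_fe = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fe) ** 2 for wi, pi in zip(w, p))
    df = len(p) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    # Re-weight with the between-study variance added to each study variance
    w_re = [1 / (vi + tau2) for vi in v]
    p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return p_re, (p_re - 1.96 * se, p_re + 1.96 * se)

# Hypothetical studies: nurses reporting >= 1 medication error / sample size
prev, ci = pooled_prevalence_dl([60, 45, 90], [100, 150, 120])
print(f"pooled prevalence {prev:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```

When heterogeneity is large, as it usually is for prevalence studies, the between-study variance τ² dominates the weights and widens the CI, which is why pooled prevalence intervals such as 41%-60% are so broad.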

  14. Systematic measurement errors involved in over-refraction using an autorefractor (Grand-Seiko WV-500): is measurement of accommodative lag through spectacle lenses valid?

    Science.gov (United States)

    Kimura, Shuhei; Hasebe, Satoshi; Ohtsuki, Hiroshi

    2007-05-01

Lags of accommodation in ametropic children are often evaluated through spectacle lenses (over-refraction). This study investigated the validity of over-refraction when using an autorefractor. Using an autorefractor (Shin-Nippon SRW-500/Grand-Seiko WV-500), refractive readings were obtained in 25 cyclopleged eyes (mean ± SD refraction: -3.44 ± 3.56 D, range: -10.56 to +0.25 D) while placing spherical lenses of different power (from -5.00 to +5.00 D) in front of the eye at a vertex distance of 12 mm. Based on the refractive readings with and without the lens, and the lens power, measurement errors were estimated. Similarly, the measurement errors were estimated in model eyes of -10.00, -4.75, 0.00 and +10.00 D. The results were compared with ray-tracing simulations based on the internal specifications of the autorefractor. Measurement errors were found unless the power of the spectacle lens was equal to the refractive error of the eye. When the spectacle lens power was greater (less myopic or more hyperopic) than the refraction of the eye, the measurement error was negative in sign and greater than -0.3 D. It follows that, when an accommodative response is measured in myopic subjects, the refractive reading usually becomes more myopic than the refraction of the eye including the accommodative response; hence, the accommodative response is overestimated, and the lag of accommodation is underestimated. Autorefraction through spectacle lenses involved systematic measurement errors. The extent of the errors is usually small but needs to be taken into account in comparative studies of accommodative responses among different refractive groups.
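One textbook ingredient of such errors is lens effectivity: a spectacle lens of power F worn at vertex distance d has effective power F/(1 − dF) at the corneal plane, so an over-refraction reads zero only when the lens exactly neutralizes the eye. A sketch with illustrative powers (not the study's data or its full ray-tracing model):

```python
def effective_power_at_cornea(lens_power_d, vertex_m=0.012):
    """Effective power (diopters) of a spectacle lens referred to the
    corneal plane: F / (1 - d*F) for a lens of power F at vertex distance d."""
    return lens_power_d / (1.0 - vertex_m * lens_power_d)

# A -3.00 D lens at 12 mm in front of an eye needing -5.00 D at the cornea:
# the residual is what the over-refraction should ideally report.
lens, eye = -3.00, -5.00
residual = eye - effective_power_at_cornea(lens)
print(f"residual refraction at the corneal plane: {residual:+.2f} D")
```

Because the effectivity correction is nonlinear in F, readings taken through lenses that do not match the eye's refraction are systematically displaced, in line with the dependence on lens power reported above.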

  15. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  16. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H [Wayne State University, Detroit, MI (United States); Chetty, I; Wen, N [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs with 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30°, 45°, 60°) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6° in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and standard deviations of translational errors were −0.2±0.7, 0.04±0.5 and 0.1±0.4 mm in the LNG, LAT and VRT directions, respectively. For extra-cranial sites, means and standard deviations of translational errors were −0.04±1, 0.2±1 and 0.1±1 mm in the LNG, LAT and VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  17. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Xu, H; Chetty, I; Wen, N

    2016-01-01

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs with 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30°, 45°, 60°) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6° in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and standard deviations of translational errors were −0.2±0.7, 0.04±0.5 and 0.1±0.4 mm in the LNG, LAT and VRT directions, respectively. For extra-cranial sites, means and standard deviations of translational errors were −0.04±1, 0.2±1 and 0.1±1 mm in the LNG, LAT and VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  18. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after being corrected with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. Therefore, the first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
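
    The time-domain Allan variance invoked above can be sketched generically: it distinguishes white noise (variance falling as 1/m with the averaging factor) from correlated, unmodeled residuals (a flattened or rising curve). This is a plain non-overlapping Allan variance, not the authors' implementation.

```python
def allan_variance(samples, m):
    """Non-overlapping Allan variance at averaging factor m.

    Splits the series into consecutive bins of m samples, averages each
    bin, and returns half the mean squared difference of adjacent bin
    averages.
    """
    n_bins = len(samples) // m
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(n_bins)]
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n_bins - 1)]
    return 0.5 * sum(diffs) / len(diffs)

# Alternating residuals at m = 1: adjacent differences are all ±1.
print(allan_variance([0.0, 1.0, 0.0, 1.0], 1))  # → 0.5
```

    Plotting this quantity over a range of m for GNSS residuals would reveal whether the noise floor is white or dominated by slowly varying (e.g. atmospheric or multipath) components.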

  19. A Novel Strategy for Large-Scale Metabolomics Study by Calibrating Gross and Systematic Errors in Gas Chromatography-Mass Spectrometry.

    Science.gov (United States)

    Zhao, Yanni; Hao, Zhiqiang; Zhao, Chunxia; Zhao, Jieyu; Zhang, Junjie; Li, Yanli; Li, Lili; Huang, Xin; Lin, Xiaohui; Zeng, Zhongda; Lu, Xin; Xu, Guowang

    2016-02-16

    Metabolomics is increasingly applied to discover and validate metabolite biomarkers and illuminate biological variations. Combining multiple analytical batches in large-scale and long-term metabolomics is commonly done to generate robust metabolomics data, but gross and systematic errors are often observed. Appropriate calibration methods are required before statistical analyses. Here, we develop a novel correction strategy for large-scale and long-term metabolomics studies, which integrates metabolomics data from multiple batches and different instruments by calibrating gross and systematic errors. The gross-error calibration method applied various statistical and fitting models of the feature ratios between two adjacent quality control (QC) samples to screen and calibrate outlier variables. A virtual QC for each sample was produced by a linear fitting model of the feature intensities between two neighboring QCs to obtain a correction factor and remove the systematic bias. The suggested method was applied to metabolic profiling data of 1197 plant samples in nine batches analyzed by two gas chromatography-mass spectrometry instruments. The method was evaluated by the relative standard deviations of all the detected peaks, the average Pearson correlation coefficients, and the Euclidean distance of QCs and non-QC replicates. The results showed that the established approach outperforms the commonly used internal standard correction and total intensity signal correction methods, that it can integrate metabolomics data from multiple analytical batches and instruments, and that it allows the QC frequency to be reduced to one injection per 20 real samples. The suggested method makes large-scale metabolomics analysis practicable.
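
    The virtual-QC idea, interpolating a feature's QC intensity between two bracketing QC injections and dividing out the resulting drift factor, can be sketched as follows. Function and variable names are illustrative, not the authors' code.

```python
def virtual_qc(idx, qc_positions, qc_values):
    """Linearly interpolate the QC feature intensity at injection idx."""
    for (p0, v0), (p1, v1) in zip(zip(qc_positions, qc_values),
                                  zip(qc_positions[1:], qc_values[1:])):
        if p0 <= idx <= p1:
            t = (idx - p0) / (p1 - p0)
            return v0 + t * (v1 - v0)
    raise ValueError("injection index outside the QC bracket")

def correct_feature(intensities, qc_positions, qc_values, reference):
    """Divide out smooth drift: corrected = raw * reference / virtual QC."""
    return [raw * reference / virtual_qc(i, qc_positions, qc_values)
            for i, raw in enumerate(intensities)]

# A feature drifting linearly from 100 to 200 across five injections, with
# QCs at injections 0 and 4, is restored to the reference level of 100.
print(correct_feature([100, 125, 150, 175, 200], [0, 4], [100, 200], 100))
```

    Real batches would apply this per feature and per batch, after the gross-error (outlier) screening step the abstract describes.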

  20. A New Approach to Detection of Systematic Errors in Secondary Substation Monitoring Equipment Based on Short Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Javier Moriano

    2016-01-01

    Full Text Available In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies to make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected.

  1. A New Approach to Detection of Systematic Errors in Secondary Substation Monitoring Equipment Based on Short Term Load Forecasting.

    Science.gov (United States)

    Moriano, Javier; Rodríguez, Francisco Javier; Martín, Pedro; Jiménez, Jose Antonio; Vuksanovic, Branislav

    2016-01-12

    In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies to make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected.
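
    One way to read the gain-and-offset detection idea: regress the measured load against its STLF forecast, then flag a slope far from 1 as a gain error and an intercept far from 0 as an offset error. The least-squares fit and the thresholds below are an illustrative sketch, not the paper's algorithm.

```python
def fit_gain_offset(forecast, measured):
    """Least-squares fit of measured ≈ gain * forecast + offset."""
    n = len(forecast)
    mx = sum(forecast) / n
    my = sum(measured) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(forecast, measured))
    var = sum((x - mx) ** 2 for x in forecast)
    gain = cov / var
    offset = my - gain * mx
    return gain, offset

def flag_error(forecast, measured, gain_tol=0.05, offset_tol=2.0):
    """Flag the metering chain when the fit departs from the identity."""
    gain, offset = fit_gain_offset(forecast, measured)
    return abs(gain - 1.0) > gain_tol or abs(offset) > offset_tol

# A meter applying a 20% gain error and a +5 kW offset is flagged.
forecast = [50.0, 60.0, 80.0, 100.0, 120.0]
measured = [1.2 * f + 5.0 for f in forecast]
print(fit_gain_offset(forecast, measured), flag_error(forecast, measured))
```

    In practice the tolerances would have to absorb the STLF's own forecast error, which is the central difficulty the paper addresses.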

  2. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  3. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  4. Inborn Errors of Metabolism That Cause Sudden Infant Death : A Systematic Review with Implications for Population Neonatal Screening Programmes

    NARCIS (Netherlands)

    van Rijt, Willemijn J.; Koolhaas, Geneviève D.; Bekhof, Jolita; Heiner Fokkema, M. Rebecca; de Koning, Tom J.; Visser, Gepke; Schielen, Peter C J I; van Spronsen, Francjan J.; Derks, Terry G J

    Background: Many inborn errors of metabolism (IEMs) may present as sudden infant death (SID). Nowadays, increasing numbers of patients with IEMs are identified pre-symptomatically by population neonatal bloodspot screening (NBS) programmes. However, some patients escape early detection because their

  5. Inborn Errors of Metabolism That Cause Sudden Infant Death : A Systematic Review with Implications for Population Neonatal Screening Programmes

    NARCIS (Netherlands)

    van Rijt, Willemijn J.; Koolhaas, Genevieve D.; Bekhof, Jolita; Fokkema, M. Rebecca Heiner; de Koning, Tom J.; Visser, Gepke; Schielen, Peter C. J. I.; Spronsen, van FrancJan; Derks, Terry G. J.

    2016-01-01

    BACKGROUND: Many inborn errors of metabolism (IEMs) may present as sudden infant death (SID). Nowadays, increasing numbers of patients with IEMs are identified pre-symptomatically by population neonatal bloodspot screening (NBS) programmes. However, some patients escape early detection because their

  6. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    Science.gov (United States)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars such as with the Global Precipitation Measurement (GPM) mission.
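
    The separation of systematic bias from random error mentioned above can be sketched, under a common simplifying assumption, as removing a single multiplicative bias before measuring the residual spread. This factorization is a generic illustration, not the NMQ/Q2 procedure.

```python
def bias_and_random_error(estimates, references):
    """Split estimate-vs-reference disagreement into two parts.

    The systematic part is a multiplicative bias (ratio of totals); the
    random part is the root-mean-square residual remaining after the
    bias has been divided out of the estimates.
    """
    bias = sum(estimates) / sum(references)
    residuals = [e / bias - r for e, r in zip(estimates, references)]
    rmse = (sum(d * d for d in residuals) / len(residuals)) ** 0.5
    return bias, rmse

# Estimates that are uniformly 20% high but otherwise exact show a bias
# near 1.2 and essentially zero random error.
print(bias_and_random_error([1.2, 2.4, 6.0], [1.0, 2.0, 5.0]))
```

    A conditional version (bias as a function of rain rate, region, or season) would be closer to the error modeling the study actually pursues.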

  7. Reduction of Systematic Errors in Diagnostic Receivers Through the Use of Balanced Dicke-Switching and Y-Factor Noise Calibrations

    Energy Technology Data Exchange (ETDEWEB)

    John Musson, Trent Allison, Roger Flood, Jianxun Yan

    2009-05-01

    Receivers designed for diagnostic applications range from those having moderate sensitivity to those possessing large dynamic range. Digital receivers have a dynamic range which are a function of the number of bits represented by the ADC and subsequent processing. If some of this range is sacrificed for extreme sensitivity, noise power can then be used to perform two-point load calibrations. Since load temperatures can be precisely determined, the receiver can be quickly and accurately characterized; minute changes in system gain can then be detected, and systematic errors corrected. In addition, using receiver pairs in a balanced approach to measuring X+, X-, Y+, Y-, reduces systematic offset errors from non-identical system gains, and changes in system performance. This paper describes and demonstrates a balanced BPM-style diagnostic receiver, employing Dicke-switching to establish and maintain real-time system calibration. Benefits of such a receiver include wide bandwidth, solid absolute accuracy, improved position accuracy, and phase-sensitive measurements. System description, static and dynamic modelling, and measurement data are presented.
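
    The two-point load (Y-factor) calibration referred to above follows the standard relation T_e = (T_hot − Y·T_cold)/(Y − 1), where Y is the ratio of the powers measured with hot and cold loads. A minimal sketch, with synthetic numbers chosen for illustration:

```python
def excess_noise_temperature(p_hot, p_cold, t_hot, t_cold):
    """Standard Y-factor calibration of receiver noise temperature.

    Y is the hot/cold power ratio; since each measured power is
    proportional to (T_load + T_e), solving the two equations gives
    T_e = (T_hot - Y * T_cold) / (Y - 1).
    """
    y = p_hot / p_cold
    return (t_hot - y * t_cold) / (y - 1.0)

# Synthetic check: a receiver with a true 50 K noise temperature viewed
# with 373 K and 77 K loads (powers in arbitrary linear units).
t_e_true = 50.0
p_hot, p_cold = t_e_true + 373.0, t_e_true + 77.0
print(excess_noise_temperature(p_hot, p_cold, 373.0, 77.0))  # approximately 50.0
```

    Once T_e is known, any drift in system gain shows up directly in the noise-power baseline, which is what allows the Dicke-switched receiver to maintain real-time calibration.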

  8. Slotted rotatable target assembly and systematic error analysis for a search for long range spin dependent interactions from exotic vector boson exchange using neutron spin rotation

    Science.gov (United States)

    Haddock, C.; Crawford, B.; Fox, W.; Francis, I.; Holley, A.; Magers, S.; Sarsour, M.; Snow, W. M.; Vanderwerp, J.

    2018-03-01

    We discuss the design and construction of a novel target array of nonmagnetic test masses used in a neutron polarimetry measurement made in search of possible new exotic spin-dependent neutron-atom interactions of Nature at sub-mm length scales. This target was designed to accept and efficiently transmit a transversely polarized slow neutron beam through a series of long open parallel slots bounded by flat rectangular plates. These openings possessed equal atom density gradients normal to the slots from the flat test masses, with dimensions optimized to achieve maximum sensitivity to an exotic spin-dependent interaction from vector boson exchanges with ranges in the mm-μm regime. The parallel slots were oriented differently in four quadrants that can be rotated about the neutron beam axis in discrete 90° increments using a Geneva drive. The spin rotation signals from the four quadrants were measured using a segmented neutron ion chamber to suppress possible systematic errors from stray magnetic fields in the target region. We discuss the per-neutron sensitivity of the target to the exotic interaction, the design constraints, the potential sources of systematic errors which could be present in this design, and our estimate of the achievable sensitivity using this method.

  9. Systematic errors in temperature estimates from MODIS data covering the western Palearctic and their impact on a parasite development model

    Directory of Open Access Journals (Sweden)

    Jorge Alonso-Carné

    2013-11-01

    Full Text Available The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. The temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes a comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated error propagation of temperature uncertainties in parasite habitat suitability models by comparing outcomes of published models. Error estimates reached 36% of annual respective measurements depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles.

  10. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    Science.gov (United States)

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter

  11. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon-Neutral Energy Research (WPI-I2CNER), Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  12. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Science.gov (United States)

    Beria, Harsh; Nanda, Trushnamayee; Singh Bisht, Deepak; Chatterjee, Chandranath

    2017-12-01

    The last couple of decades have seen the outburst of a number of satellite-based precipitation products with Tropical Rainfall Measuring Mission (TRMM) as the most widely used for hydrologic applications. Transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for year 2014 and compared with the corresponding (2014) and retrospective (1998-2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  13. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Directory of Open Access Journals (Sweden)

    H. Beria

    2017-12-01

    Full Text Available The last couple of decades have seen the outburst of a number of satellite-based precipitation products, with the Tropical Rainfall Measuring Mission (TRMM) as the most widely used for hydrologic applications. Transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for year 2014 and compared with the corresponding (2014) and retrospective (1998–2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  14. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
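The reason coherent errors break the Pauli approximation can be seen already for a single unprotected qubit: n systematic X-rotations by ε compose into one rotation by nε, so the flip probability grows like (nε/2)², whereas the Pauli (twirled) approximation treats each cycle as an independent flip and predicts growth like n(ε/2)². A toy NumPy illustration of this quadratic-versus-linear accumulation (not the paper's repetition-code calculation):

```python
import numpy as np

eps = 0.01   # rotation angle per cycle (assumed small)
n = 50       # number of cycles

# Coherent case: n rotations R_x(eps) compose into R_x(n*eps),
# so the bit-flip probability is sin^2(n*eps/2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
I = np.eye(2, dtype=complex)
R = np.cos(eps / 2) * I - 1j * np.sin(eps / 2) * X   # exp(-i*eps*X/2)
U = np.linalg.matrix_power(R, n)
psi = U @ np.array([1, 0], dtype=complex)            # start in |0>
p_coherent = abs(psi[1]) ** 2

# Pauli approximation: each cycle flips independently with p = sin^2(eps/2)
p_flip = np.sin(eps / 2) ** 2
p_pauli = 0.0
for _ in range(n):
    p_pauli = p_pauli * (1 - p_flip) + (1 - p_pauli) * p_flip

print(p_coherent / p_pauli)   # ≈ 49 here: roughly n times the Pauli prediction
```

This is the single-qubit analogue of the abstract's observation that, beyond a certain number of cycles, the persistent coherent error causes failure faster than the Pauli model predicts.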

  15. Validation of the calculation of the renal impulse response function. An analysis of errors and systematic biases

    International Nuclear Information System (INIS)

    Erbsman, F.; Ham, H.; Piepsz, A.; Struyven, J.

    1978-01-01

    The renal impulse response function (renal IRF) is the time-activity curve measured over one kidney after injection of a radiopharmaceutical into the renal artery. If the tracer is injected intravenously, it is possible to compute the renal IRF by deconvolving the kidney curve with a blood curve. In previous work we demonstrated that the computed IRF is in good agreement with measurements made after injection into the renal artery. The goal of the present work is the analysis of the effect of sampling errors and the influence of extra-renal activity. The sampling error is only important for the first point of the plasma curve and yields an ill-conditioned function P⁻¹. The addition of 50 computed renal IRFs demonstrated that the first three points show a larger variability due to incomplete mixing of the tracer. These points should thus not be included in the smoothing process. Subtraction of non-renal activity does not appreciably modify the shape of the renal IRF. The mean transit time and the time to half value are almost independent of non-renal activity and seem to be the parameters of choice.
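The deconvolution step and its sensitivity to the first plasma sample can be sketched with a toy calculation: the kidney curve is the discrete convolution of the plasma curve P with the IRF h, so h is recovered by solving the lower-triangular Toeplitz system built from P, and even a small error on P[0] visibly distorts the recovered IRF. All curves below are hypothetical, chosen only to make the ill-conditioning visible.

```python
import numpy as np

def deconvolve(kidney, plasma):
    """Recover the impulse response h from kidney = conv(plasma, h)
    by solving the lower-triangular Toeplitz system built from plasma."""
    n = len(plasma)
    P = np.zeros((n, n))
    for i in range(n):
        P[i, : i + 1] = plasma[i::-1]   # P[i, j] = plasma[i - j]
    return np.linalg.solve(P, kidney)

# Synthetic curves (hypothetical, arbitrary units)
t = np.arange(20)
plasma = np.exp(-0.3 * t)              # blood input curve
h_true = np.where(t < 8, 1.0, 0.0)     # boxcar-like renal IRF
kidney = np.convolve(plasma, h_true)[:20]

h = deconvolve(kidney, plasma)         # exact recovery with clean data

# A 5% sampling error on the *first* plasma point propagates strongly
plasma_err = plasma.copy()
plasma_err[0] *= 1.05
h_err = deconvolve(kidney, plasma_err)  # visibly distorted IRF
```

In practice this is why the first plasma point, and the first few points of the computed IRF affected by incomplete mixing, are excluded from smoothing.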

  16. A systematic review of accidental injury from fire, wandering and medication self-administration errors for older adults with and without dementia.

    Science.gov (United States)

    Douglas, Alison; Letts, Lori; Richardson, Julie

    2011-01-01

    The assessment of risk of injury in the home is important for older adults when considering whether they are able to live independently. The purpose of this systematic review is to determine the frequency of injury for persons with dementia and the general older adult population, from three sources: fires/burns, medication self-administration errors and wandering. Relevant articles (n=74) were screened and 16 studies were retained for independent review. The studies, although subject to selection and information bias, showed low proportions of morbidity and mortality from the three sources of injury. Data did not allow direct comparison of morbidity and mortality for persons with dementia and the general older adult population; however, data trends suggested greater event frequencies with medication self-administration and wandering for persons with dementia. Assessment targeting these sources of injury should have less emphasis in the general older adult population compared to persons with dementia. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  17. On the isobaric space of 25-hydroxyvitamin D in human serum: potential for interferences in liquid chromatography/tandem mass spectrometry, systematic errors and accuracy issues.

    Science.gov (United States)

    Qi, Yulin; Geib, Timon; Schorr, Pascal; Meier, Florian; Volmer, Dietrich A

    2015-01-15

    Isobaric interferences in human serum can potentially influence the measured concentration levels of 25-hydroxyvitamin D [25(OH)D], when low resolving power liquid chromatography/tandem mass spectrometry (LC/MS/MS) instruments and non-specific MS/MS product ions are employed for analysis. In this study, we provide a detailed characterization of these interferences and a technical solution to reduce the associated systematic errors. Detailed electrospray ionization Fourier transform ion cyclotron resonance (FTICR) high-resolution mass spectrometry (HRMS) experiments were used to characterize co-extracted isobaric components of 25(OH)D from human serum. Differential ion mobility spectrometry (DMS), as a gas-phase ion filter, was implemented on a triple quadrupole mass spectrometer for separation of the isobars. HRMS revealed the presence of multiple isobaric compounds in extracts of human serum for different sample preparation methods. Several of these isobars had the potential to increase the peak areas measured for 25(OH)D on low-resolution MS instruments. A major isobaric component was identified as pentaerythritol oleate, a technical lubricant, which was probably an artifact from the analytical instrumentation. DMS was able to remove several of these isobars prior to MS/MS, when implemented on the low-resolution triple quadrupole mass spectrometer. It was shown in this proof-of-concept study that DMS-MS has the potential to significantly decrease systematic errors, and thus improve accuracy of vitamin D measurements using LC/MS/MS. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Study of systematic errors in the determination of total Hg levels in the range -5% in inorganic and organic matrices with two reliable spectrometrical determination procedures

    International Nuclear Information System (INIS)

    Kaiser, G.; Goetz, D.; Toelg, G.; Max-Planck-Institut fuer Metallforschung, Stuttgart; Knapp, G.; Maichin, B.; Spitzy, H.

    1978-01-01

    In the determination of Hg at ng/g and pg/g levels, systematic errors are due to faults in the analytical methods, such as intake, preparation and decomposition of a sample. The sources of these errors have been studied both with ²⁰³Hg radiotracer techniques and with two multi-stage procedures developed for the determination of trace levels. The emission spectrometric (OES-MIP) procedure includes incineration of the sample in a microwave-induced oxygen plasma (MIP), isolation and enrichment on a gold adsorbent, and excitation in an argon plasma (MIP). The emitted Hg radiation (253.7 nm) is evaluated photometrically with a semiconductor element. The detection limit of the OES-MIP procedure was found to be 0.01 ng, the coefficient of variation 5% for 1 ng Hg. The second procedure combines a semi-automated wet digestion method (HClO3/HNO3) with reduction-aeration (ascorbic acid/SnCl2) and the flameless atomic absorption technique (253.7 nm). The detection limit of this procedure was found to be 0.5 ng, the coefficient of variation 5% for 5 ng Hg. (orig.) [de]

  19. Systematic errors in detecting biased agonism: Analysis of current methods and development of a new model-free approach.

    Science.gov (United States)

    Onaran, H Ongun; Ambrosio, Caterina; Uğur, Özlem; Madaras Koncz, Erzsebet; Grò, Maria Cristina; Vezzi, Vanessa; Rajagopal, Sudarshan; Costa, Tommaso

    2017-03-14

    Discovering biased agonists requires a method that can reliably distinguish the bias in signalling due to unbalanced activation of diverse transduction proteins from that of differential amplification inherent to the system being studied, which invariably results from the non-linear nature of biological signalling networks and their measurement. We have systematically compared the performance of seven methods of bias diagnostics, all of which are based on the analysis of concentration-response curves of ligands according to classical receptor theory. We computed bias factors for a number of β-adrenergic agonists by comparing BRET assays of receptor-transducer interactions with Gs, Gi and arrestin. Using the same ligands, we also compared responses at signalling steps originating from the same receptor-transducer interaction, among which no biased efficacy is theoretically possible. In both cases, we found a high level of false positive results and a general lack of correlation among methods. Altogether this analysis shows that all tested methods, including some of the most widely used in the literature, fail to distinguish true ligand bias from "system bias" with confidence. We also propose two novel semi-quantitative methods of bias diagnostics that appear to be more robust and reliable than currently available strategies.

  20. Systematic Review of Ultrasonic Impact Treatment Parameters on Residual Stresses of Welded Non-Sensitized Versus Sensitized Aluminum-Magnesium

    Science.gov (United States)

    2015-03-01

    cycles of wet and dry exposure in this environment, in which concentrated amounts of chloride (dried seawater) are in contact with the aluminum...around welds in AA5083 installed on-board a U.S. naval combatant and in AA5083 after in situ surface preparation. In the AA5456, we examined the...Systematically Ultrasonic Impact Treated, Gas Metal Arc Welded, Aluminum-Alloy 5456 Plates

  1. Random and systematic sampling error when hooking fish to monitor skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden in Australian farmed yellowtail kingfish (Seriola lalandi).

    Science.gov (United States)

    Fensham, J R; Bubner, E; D'Antignana, T; Landos, M; Caraguel, C G B

    2018-05-01

    The Australian farmed yellowtail kingfish (Seriola lalandi, YTK) industry monitors skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden by pooling the fluke count of 10 hooked YTK. The random and systematic error of this sampling strategy was evaluated to assess potential impact on treatment decisions. Fluke abundance (fluke count per fish) in a study cage (estimated 30,502 fish) was assessed five times using the current sampling protocol, and its repeatability was estimated using the repeatability coefficient (CR) and the coefficient of variation (CV). Individual body weight, fork length, fluke abundance, prevalence, intensity (fluke count per infested fish) and density (fluke count per kg of fish) were compared between 100 hooked and 100 seined YTK (assumed representative of the entire population) to estimate potential selection bias. Depending on the fluke species and age category, CR (expected difference in parasite count between 2 sampling iterations) ranged from 0.78 to 114 flukes per fish. Capturing YTK by hooking increased the selection of fish of a weight and length in the lowest 5th percentile of the cage (RR = 5.75, 95% CI: 2.06-16.03, P-value = 0.0001). These lower-end YTK had on average an extra 31 juvenile and 6 adult Z. seriolae per kg of fish and an extra 3 juvenile and 0.4 adult B. seriolae per kg of fish, compared to the rest of the cage population. Hooking thus biased sampling towards the smallest and most heavily infested fish in the population, resulting in poor repeatability (more variability amongst sampled fish) and an overestimation of parasite burden in the population. In this particular commercial situation these findings supported the health management program, where an underestimation of parasite burden could have a production impact on the study population. In instances where fish populations and parasite burdens are more homogeneous, sampling error may be less severe. Sampling error when capturing fish
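For reference, the two repeatability statistics named above can be computed from repeated assessments of the same cage as follows. This is a common Bland-Altman-style formulation (CR = 1.96·√2 times the between-iteration standard deviation); the data values and the exact formula used by the authors are assumptions.

```python
import numpy as np

def repeatability(replicates):
    """replicates: repeated estimates of the same quantity, e.g. mean fluke
    count per fish from successive sampling iterations of one cage.
    Returns (CR, CV):
      CR = 1.96 * sqrt(2) * SD  -- expected difference between two iterations
      CV = SD / mean            -- relative variability of the estimate
    """
    replicates = np.asarray(replicates, float)
    sd = replicates.std(ddof=1)
    return 1.96 * np.sqrt(2.0) * sd, sd / replicates.mean()

# Hypothetical fluke-abundance estimates from five hooked samples of one cage
counts = [12.4, 9.8, 15.1, 11.2, 13.6]
cr, cv = repeatability(counts)   # CR ≈ 5.7 flukes per fish, CV ≈ 17%
```

A large CR relative to the treatment threshold means two successive samplings of the same cage could point to different treatment decisions, which is the practical concern raised in the abstract.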

  2. INTERVENTIONS TO MANAGE RESIDUAL LIMB ULCERATION DUE TO PROSTHETIC USE IN INDIVIDUALS WITH LOWER EXTREMITY AMPUTATION: A SYSTEMATIC REVIEW OF THE LITERATURE.

    Science.gov (United States)

    Highsmith, M Jason; Kahle, Jason T; Klenow, Tyler D; Andrews, Casey R; Lewis, Katherine L; Bradley, Rachel C; Ward, Jessica M; Orriola, John J; Highsmith, James T

    2016-09-01

    Patients with lower extremity amputation (LEA) experience 65% more dermatologic issues than non-amputees, and skin problems are experienced by ≈75% of LEA patients who use prostheses. Continuously referring LEA patients to a dermatologist for every stump related skin condition may be impractical. Thus, physical rehabilitation professionals should be prepared to recognize and manage common non-emergent skin conditions in this population. The purpose of this study was to determine the quantity, quality, and strength of available evidence supporting treatment methods for prosthesis-related residual limb (RL) ulcers. Systematic literature review with evidence grading and synthesis of empirical evidence statements (EES) was employed. Three EESs were formulated describing ulcer etiology, conditions in which prosthetic continuance is practical, circumstances likely requiring prosthetic discontinuance, and the consideration of additional medical or surgical interventions. Continued prosthetic use is a viable option to manage minor or early-stage ulcerated residual limbs in compliant patients lacking multiple comorbidities. Prosthetic discontinuance is also a viable method of residual limb ulcer healing and may be favored in the presence of severe acute ulcerations, chronic heavy smoking, intractable pain, rapid volume and weight change, history of chronic ulceration, systemic infections, or advanced dysvascular etiology. Surgery or other interventions may also be necessary in such cases to achieve restored prosthetic ambulation. A short bout of prosthetic discontinuance with a staged re-introduction plan is another viable option that may be warranted in patients with ulceration due to poor RL volume management. High-quality prospective research with larger samples is needed to determine the most appropriate course of treatment when a person with LEA develops an RL ulcer that is associated with prosthetic use.

  3. A Low-Cost Environmental Monitoring System: How to Prevent Systematic Errors in the Design Phase through the Combined Use of Additive Manufacturing and Thermographic Techniques.

    Science.gov (United States)

    Salamone, Francesco; Danza, Ludovico; Meroni, Italo; Pollastro, Maria Cristina

    2017-04-11

    nEMoS (nano Environmental Monitoring System) is a 3D-printed device built following the Do-It-Yourself (DIY) approach. It can be connected to the web and it can be used to assess indoor environmental quality (IEQ). It is built using some low-cost sensors connected to an Arduino microcontroller board. The device is assembled in a small-sized case and both thermohygrometric sensors used to measure the air temperature and relative humidity, and the globe thermometer used to measure the radiant temperature, can be subject to thermal effects due to overheating of some nearby components. A thermographic analysis was made to rule out this possibility. The paper shows how the pervasive technique of additive manufacturing can be combined with the more traditional thermographic techniques to redesign the case and to verify the accuracy of the optimized system in order to prevent instrumental systematic errors in terms of the difference between experimental and actual values of the above-mentioned environmental parameters.

  4. Evaluation of Stability of Complexes of Inner Transition Metal Ions with 2-Oxo-1-pyrrolidine Acetamide and Role of Systematic Errors

    Directory of Open Access Journals (Sweden)

    Sangita Sharma

    2011-01-01

    Full Text Available BEST FIT models were used to study the complexation of inner transition metal ions like Y(III), La(III), Ce(III), Pr(III), Nd(III), Sm(III), Gd(III), Dy(III) and Th(IV) with 2-oxo-1-pyrrolidine acetamide at 30 °C in 10%, 20%, 30%, 40%, 50% and 60% v/v dioxane-water mixtures at 0.2 M ionic strength. The Irving-Rossotti titration method was used to obtain titration data. Calculations were carried out with the PKAS and BEST Fortran IV computer programs. The expected species, like L, LH+, ML, ML2 and ML(OH)3, were obtained with SPEPLOT. Stability of complexes increased with increasing dioxane content. The observed change in stability can be explained on the basis of electrostatic effects, non-electrostatic effects, solvating power of the solvent mixture, interaction between ions and interaction of ions with solvents. Effects of systematic errors such as dissolved carbon dioxide and the concentrations of alkali, acid, ligand and metal are also explained.

  5. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  6. Proposed systematic methodology for analysis of Pb-210 radioactivity in residues produced in Brazilian natural gas pipes

    International Nuclear Information System (INIS)

    Ferreira, Aloisio Cordilha

    2003-11-01

    Since the 1980s, the potential radiological hazards due to the handling of solid wastes contaminated with long-lived Rn-222 progeny - Pb-210 in particular - produced in gas pipes and removed by pig operations have been a subject of growing concern outside Brazil. Nevertheless, little or no attention has been paid to this matter in the Brazilian plants up to now, these hazards being frequently underestimated or even ignored. The main purpose of this work was to propose a systematic methodology for analysis of Pb-210 radioactivity in black powder samples from some Brazilian plants, through evaluation of the technical viability of direct Pb-210 gamma spectrometry and of Bi-210 beta counting. In both cases, one in five samples of black powder analysed showed relevant activity (above 1 Bq/kg) of Pb-210, these results being probably related to particular features of each specific plant (production levels, reservoir geochemical profile, etc.), in such a way that no single pattern is observed. For the proposed methodology, gamma spectrometry proved to be the more reliable technique, showing a 3.5% standard deviation and, at a 95% confidence level, overall agreement with the range of Pb-210 activity concentration given in the standard sample reference sheet provided by the IAEA for intercomparison purposes. In the Brazilian scene, however, the available statistically supported evidence is insufficient to allow the potential radiological hazard due to the management of black powder to be ruled out. Thus, further research efforts are recommended in order to identify the potentially critical regions or plants where gas exploration, production and processing practices will require a regular program of radiological surveillance in the near future. (author)

  7. Residual deposits (residual soil)

    International Nuclear Information System (INIS)

    Khasanov, A.Kh.

    1988-01-01

    Residual soil deposits are accumulations of newly formed ore minerals on the earth's surface that arise as a result of chemical decomposition of rocks. As is well known, in the hypergene zone chemical weathering of rocks proceeds under the influence of different agents (water, carbonic acid, organic acids, oxygen, microorganism activity). The formation of residual soil deposits depends on a complex of geologic and climatic factors and also on the composition and the physical and chemical properties of the initial rocks.

  8. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on the average β-beating is studied via analytical derivations and simulations. A systematic positive average β-beating is expected from random errors, growing quadratically with the source strengths or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  9. Medication Errors

    Science.gov (United States)


  10. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    KAUST Repository

    Wang, B

    2017-02-15

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner, and further introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach.
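The correction strategy described above, subtracting a pre-established artificial dilatational strain-time curve obtained from rescans of a stationary sample, can be sketched as follows. All strain values and the choice of linear interpolation for the drift curve are hypothetical, chosen only to illustrate the bookkeeping.

```python
import numpy as np

# Apparent strain (microstrain) from repeated rescans of a *stationary*
# sample: any nonzero reading is a self-heating artifact (values hypothetical)
t_rescan = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])   # minutes
eps_rescan = np.array([0.0, 150.0, 260.0, 330.0, 380.0, 405.0, 420.0])

def drift(t):
    """Pre-established artificial strain-time curve (linear interpolation)."""
    return np.interp(t, t_rescan, eps_rescan)

# Correct strains from a later test: measured = true strain + self-heating drift
t_test = np.array([5.0, 25.0, 45.0])
eps_measured = np.array([1075.0, 1295.0, 1392.5])   # hypothetical readings
eps_corrected = eps_measured - drift(t_test)        # recovers ~1000 µε here
```

The key assumption is that the drift depends only on scan time, so a curve measured once on a stationary sample can be reused to correct subsequent loaded tests.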

  11. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    International Nuclear Information System (INIS)

    Wang, B; Pan, B; Tao, R; Lubineau, G

    2017-01-01

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner, and further introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach. (paper)

  12. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    At spectacular events, a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  13. Southern Hemisphere Application of the Systematic Approach to Tropical Cyclone Forecasting Part IV: Sources of Large Track Errors by Dynamical Models

    National Research Council Canada - National Science Library

    Reader, Grahame

    2000-01-01

    Sources of 72-h track errors > 300 n mi by four dynamical model tropical cyclone predictions in the Southern Hemisphere during the 1997-98 and 1998-99 seasons are studied using conceptual models Carr and Elsberry have previously...

  14. Southern Hemisphere Application of the Systematic Approach to Tropical Cyclone Forecasting Part 4: Sources of Large Track Errors by Dynamical Models

    National Research Council Canada - National Science Library

    Reader, Grahame

    2000-01-01

    Sources of 72-h track errors > 300 n mi by four dynamical model tropical cyclone predictions in the Southern Hemisphere during the 1997-98 and 1998-99 seasons are studied using conceptual models Carr and Elsberry have previously...

  15. Human Error in Pilotage Operations

    Directory of Open Access Journals (Sweden)

    Jørgen Ernstsen

    2018-03-01

    Full Text Available Pilotage operations require close interaction between humans and machines. This complex sociotechnical system is necessary to safely and efficiently maneuver a vessel in constrained waters. A sociotechnical system consists of interdependent human and technical variables that must continuously work together to be successful. This complexity is prone to errors, and statistics show that most of these errors in the maritime domain (80–85%) are due to the human components in the system. This explains the attention on research to reduce human errors. The current study deployed the systematic human error reduction and prediction approach (SHERPA) to shed light on the error types and error remedies apparent in pilotage operations. Data were collected using interviews and observation. A hierarchical task analysis was performed and 55 tasks were analyzed using SHERPA. Findings suggest that communication errors and action omission errors are the most prevalent human errors in pilotage operations. Practical and theoretical implications of the results are discussed.

  16. Error estimation, validity and best practice guidelines for quantifying coalescence frequency during emulsification using the step-down technique

    Directory of Open Access Journals (Sweden)

    Andreas Håkansson

    2017-07-01

    This contribution derives error estimates for three non-idealities present in every step-down experiment: (i) limited sampling rate, (ii) non-instantaneous step-down and (iii) residual fragmentation after the step. It is concluded that all three factors give rise to systematic errors in estimating the coalescence rate. However, by carefully choosing experimental settings, the errors can be kept small. The method thus remains suitable for many conditions. Best practice guidelines for applying the method are given, both generally and more specifically for stirred-tank oil-in-water emulsification.
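As a concrete sketch of the quantity whose error is being bounded: if fragmentation truly stops at the step-down, the drop number concentration N(t) decays through coalescence alone, and a first-order coalescence frequency can be read off the slope of ln N versus t. The data below are synthetic; the paper's exact estimator may differ, and the three non-idealities discussed above (finite sampling rate, a non-instantaneous step, residual fragmentation) would each bias this slope in practice.

```python
import numpy as np

# Synthetic drop number concentration after the step-down (per mL), decaying
# by coalescence alone at frequency C_true (all values hypothetical)
C_true = 0.004                                    # 1/s
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])      # seconds after the step
N = 1.0e7 * np.exp(-C_true * t)

# Estimate the coalescence frequency from the slope of ln N versus t
slope, intercept = np.polyfit(t, np.log(N), 1)
C_est = -slope                                    # recovers 0.004 here
```

With real data the decay is only approximately exponential, which is why the sampling interval and the sharpness of the step directly limit the accuracy of C_est.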

  17. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reasons why they make them, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  18. Medical error

    African Journals Online (AJOL)

    QuickSilver

    is only when mistakes are recognised that learning can occur...All our previous medical training has taught us to fear error, as error is associated with blame. This fear may lead to concealment and this in turn can lead to fraud'. How real this fear is! All of us, during our medical training, have had the maxim 'prevention is.

  19. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    NARCIS (Netherlands)

    Luijten, M.; Machielsen, M.W.J.; Veltman, D.J.; Hester, R.; de Haan, L.; Franken, I.H.A.

    2014-01-01

    Background: Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The

  20. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  1. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  2. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  3. Exact Solutions for Internuclear Vectors and Backbone Dihedral Angles from NH Residual Dipolar Couplings in Two Media, and their Application in a Systematic Search Algorithm for Determining Protein Backbone Structure

    International Nuclear Information System (INIS)

    Wang Lincong; Donald, Bruce Randall

    2004-01-01

We have derived a quartic equation for computing the direction of an internuclear vector from residual dipolar couplings (RDCs) measured in two aligning media, and two simple trigonometric equations for computing the backbone (φ,ψ) angles from two backbone vectors in consecutive peptide planes. These equations make it possible to compute, exactly and in constant time, the backbone (φ,ψ) angles for a residue from RDCs in two media on any single backbone vector type. Building upon these exact solutions we have designed a novel algorithm for determining a protein backbone substructure consisting of α-helices and β-sheets. Our algorithm employs a systematic search technique to refine the conformation of both α-helices and β-sheets and to determine their orientations using exclusively the angular restraints from RDCs. The algorithm computes the backbone substructure employing very sparse distance restraints between pairs of α-helices and β-sheets refined by the systematic search. The algorithm has been demonstrated on the protein human ubiquitin using only backbone NH RDCs, plus twelve hydrogen bonds and four NOE distance restraints. Further, our results show that both the global orientations and the conformations of α-helices and β-strands can be determined with high accuracy using only two RDCs per residue. The algorithm requires, as its input, backbone resonance assignments, the identification of α-helices and β-sheets as well as sparse NOE distance and hydrogen bond restraints. Abbreviations: NMR - nuclear magnetic resonance; RDC - residual dipolar coupling; NOE - nuclear Overhauser effect; SVD - singular value decomposition; DFS - depth-first search; RMSD - root mean square deviation; POF - principal order frame; PDB - protein data bank; SA - simulated annealing; MD - molecular dynamics
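The geometric dependence that RDC-based structure determination exploits can be sketched with the standard Saupe order matrix form D = Dmax·uᵀSu. This is generic background, not the authors' quartic solution; the tensor and Dmax values below are hypothetical illustrations.

```python
import numpy as np

def rdc(u, saupe, d_max):
    """Residual dipolar coupling for an internuclear vector u under a
    traceless, symmetric Saupe order matrix (illustrative values only)."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)          # normalize the internuclear vector
    return d_max * (u @ saupe @ u)     # quadratic form gives the orientation dependence

# Hypothetical axially symmetric alignment tensor (traceless, principal axis z)
S = np.diag([-0.5e-3, -0.5e-3, 1.0e-3])
```

For a vector along the principal axis, `rdc([0, 0, 1], S, 21700.0)` evaluates to 21.7; measuring the same vector in a second, independent alignment medium is what removes the orientational degeneracy, as the abstract describes.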

  4. Estimating climate model systematic errors in a climate change impact study of the Okavango River basin, southwestern Africa using a mesoscale model

    Science.gov (United States)

    Raghavan, S. V.; Todd, M.

    2007-12-01

    Simulating the impact of future climate variability and change on hydrological systems requires estimates of climate at high spatial resolution compatible with hydrological models. Here we present initial results of a project to simulate future climate over the Okavango River basin and delta in Southwestern Africa. Given the significance of the delta to biodiversity and as a resource to the local population, there is considerable concern regarding the sensitivity of the system to future climate change. An important component of climate variability/change impact studies is an assessment of errors in the modeling suite. Here, we attempt to quantify errors and uncertainties involved in regional climate modelling that will impact on hydrological simulations. The study determines the ability of the MM5 Regional Climate Model to simulate the present day regional climate at the high resolution required by the hydrological models and the effectiveness of the RCM in downscaling GCM outputs to study regional climate change and impacts.

  5. Error Analysis and the EFL Classroom Teaching

    Science.gov (United States)

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

This paper makes a study of error analysis and its implementation in EFL (English as a Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis); the various causes of errors are then comprehensively explored. The author proposes that teachers should employ…

  6. The Usability-Error Ontology

    DEFF Research Database (Denmark)

    Elkin, Peter L.; Beuscart-zephir, Marie-Catherine; Pelayo, Sylvia

    2013-01-01

in patients coming to harm. Often the root cause analysis of these adverse events can be traced back to Usability Errors in the Health Information Technology (HIT) or its interaction with users. Interoperability of the documentation of HIT related Usability Errors in a consistent fashion can improve our … ability to do systematic reviews and meta-analyses. In an effort to support improved and more interoperable data capture regarding Usability Errors, we have created the Usability Error Ontology (UEO) as a classification method for representing knowledge regarding Usability Errors. We expect the UEO … will grow over time to support an increasing number of HIT system types. In this manuscript, we present this Ontology of Usability Error Types and specifically address Computerized Physician Order Entry (CPOE), Electronic Health Records (EHR) and Revenue Cycle HIT systems.

  7. Refractive Errors

    Science.gov (United States)

Refractive Errors in Children … birth and can occur at any age. The prevalence of myopia is low in US children under the age of eight, but much higher …

  8. Residuation theory

    CERN Document Server

    Blyth, T S; Sneddon, I N; Stark, M

    1972-01-01

    Residuation Theory aims to contribute to literature in the field of ordered algebraic structures, especially on the subject of residual mappings. The book is divided into three chapters. Chapter 1 focuses on ordered sets; directed sets; semilattices; lattices; and complete lattices. Chapter 2 tackles Baer rings; Baer semigroups; Foulis semigroups; residual mappings; the notion of involution; and Boolean algebras. Chapter 3 covers residuated groupoids and semigroups; group homomorphic and isotone homomorphic Boolean images of ordered semigroups; Dubreil-Jacotin and Brouwer semigroups; and loli

  9. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

Off-resonance effects can introduce significant systematic errors in R2 measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, 15N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R2 caused by noise. Good estimates of total R2 uncertainty are critical in order to obtain accurate estimates of optimized chemical-exchange parameters and their uncertainties derived from χ2 minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in 15N R2 values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ2 minimization protocol, in which the Carver-Richards equation is used to fit the observed R2 dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from 1H R2 measurements in which systematic errors are negligible. Although 1H and 15N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τex, and the fractional population, pa), were constrained to globally fit all R2 profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τex, pa as global parameters was not improved when these parameters were free to fit the R

  10. Effect of Neutral-pH, Low-Glucose Degradation Product Peritoneal Dialysis Solutions on Residual Renal Function, Urine Volume, and Ultrafiltration: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Yohanna, Seychelle; Alkatheeri, Ali M A; Brimble, Scott K; McCormick, Brendan; Iansavitchous, Arthur; Blake, Peter G; Jain, Arsh K

    2015-08-07

    Neutral-pH, low-glucose degradation products solutions were developed in an attempt to lessen the adverse effects of conventional peritoneal dialysis solutions. A systematic review was performed evaluating the effect of these solutions on residual renal function, urine volume, peritoneal ultrafiltration, and peritoneal small-solute transport (dialysate to plasma creatinine ratio) over time. Multiple electronic databases were searched from January of 1995 to January of 2013. Randomized trials reporting on any of four prespecified outcomes were selected by consensus among multiple reviewers. Eleven trials of 643 patients were included. Trials were generally of poor quality. The meta-analysis was performed using a random effects model. The use of neutral-pH, low-glucose degradation products solutions resulted in better preserved residual renal function at various study durations, including >1 year (combined analysis: 11 studies; 643 patients; standardized mean difference =0.17 ml/min; 95% confidence interval, 0.01 to 0.32), and greater urine volumes (eight studies; 598 patients; mean difference =128 ml/d; 95% confidence interval, 58 to 198). There was no significant difference in peritoneal ultrafiltration (seven studies; 571 patients; mean difference =-110; 95% confidence interval, -312 to 91) or dialysate to plasma creatinine ratio (six studies; 432 patients; mean difference =0.03; 95% confidence interval, 0.00 to 0.06). The use of neutral-pH, low-glucose degradation products solutions results in better preservation of residual renal function and greater urine volumes. The effect on residual renal function occurred early and persisted beyond 12 months. Additional studies are required to evaluate the use of neutral-pH, low-glucose degradation products solutions on hard clinical outcomes. Copyright © 2015 by the American Society of Nephrology.
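The pooled estimates quoted above (mean differences with 95% confidence intervals under a random effects model) are commonly computed with the DerSimonian-Laird estimator. A minimal sketch with made-up study data, not this review's trial data:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird tau^2 estimator."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance, floored at 0
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

# Illustrative urine-volume-style mean differences (ml/d) and their variances
mu, se, tau2 = dersimonian_laird([120.0, 140.0, 100.0], [400.0, 900.0, 625.0])
print(mu, mu - 1.96 * se, mu + 1.96 * se)  # pooled estimate and 95% CI
```

When the studies are homogeneous, tau² collapses to zero and the result reduces to the inverse-variance fixed-effect pooled mean.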

  11. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

Transportation based on sustainable energy requires an energy carrier, which is able to store the predominately electrical energy generated from sustainable sources in a high energy density form. Metal-air batteries, hydrogen and synthetic fuels are possible future energy carriers. Density … if not for the systematic errors. In this thesis it is shown how the systematic errors can be mitigated. For different alkali and alkaline earth metal oxides, systematic errors have previously been observed. These errors are primarily caused by differences in metal element oxidation state. The systematic errors can…

  12. Inborn Errors of Metabolism.

    Science.gov (United States)

    Ezgu, Fatih

    2016-01-01

Inborn errors of metabolism are single gene disorders resulting from defects in the biochemical pathways of the body. Although these disorders are individually rare, collectively they account for a significant portion of childhood disability and deaths. Most of the disorders are inherited as autosomal recessive, whereas autosomal dominant and X-linked disorders are also present. The clinical signs and symptoms arise from the accumulation of the toxic substrate, deficiency of the product, or both. Depending on the residual activity of the deficient enzyme, the initiation of the clinical picture may vary, starting from the newborn period up until adulthood. Hundreds of disorders have been described until now, and there is considerable clinical overlap between certain inborn errors. As a result, the definite diagnosis of inborn errors depends on enzyme assays or genetic tests. Especially during recent years, significant achievements have been made in the biochemical and genetic diagnosis of inborn errors. Techniques such as tandem mass spectrometry and gas chromatography for biochemical diagnosis, and microarrays and next-generation sequencing for genetic diagnosis, have enabled rapid and accurate diagnosis. These achievements have also enabled newborn screening and prenatal diagnosis. Parallel to the development of diagnostic methods, significant progress has also been made in treatment. Treatment approaches such as special diets, enzyme replacement therapy, substrate inhibition, and organ transplantation have been widely used. It is obvious that, with the help of the preclinical and clinical research carried out on inborn errors, better diagnostic methods and better treatment approaches will most likely become available. © 2016 Elsevier Inc. All rights reserved.

  13. Statistical errors and systematic biases in the calibration of the convective core overshooting with eclipsing binaries. A case study: TZ Fornacis

    Science.gov (United States)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2017-04-01

Context. Recently published work has made high-precision fundamental parameters available for the binary system TZ Fornacis, making it an ideal target for the calibration of stellar models. Aims: Relying on these observations, we attempt to constrain the initial helium abundance, the age and the efficiency of the convective core overshooting. Our main aim is to point out the biases in the results due to not accounting for some sources of uncertainty. Methods: We adopt the SCEPtER pipeline, a maximum likelihood technique based on fine grids of stellar models computed for various values of metallicity, initial helium abundance and overshooting efficiency by means of two independent stellar evolutionary codes, namely FRANEC and MESA. Results: Besides the degeneracy between the estimated age and overshooting efficiency, we found the existence of multiple independent groups of solutions. The best one suggests a system of age 1.10 ± 0.07 Gyr composed of a primary star in the central helium-burning stage and a secondary in the sub-giant branch (SGB). The resulting initial helium abundance is consistent with a helium-to-metal enrichment ratio of ΔY/ΔZ = 1; the core overshooting parameter is β = 0.15 ± 0.01 for FRANEC and fov = 0.013 ± 0.001 for MESA. The second class of solutions, characterised by a worse goodness-of-fit, still suggests a primary star in the central helium-burning stage but a secondary in the overall contraction phase, at the end of the main sequence (MS). In this case, the FRANEC grid provides an age of Gyr and a core overshooting parameter , while the MESA grid gives 1.23 ± 0.03 Gyr and fov = 0.025 ± 0.003. We analyse the impact on the results of a larger, but typical, mass uncertainty and of neglecting the uncertainty in the initial helium content of the system. We show that very precise mass determinations with uncertainty of a few thousandths of solar mass are required to obtain reliable determinations of stellar parameters, as mass errors

  14. Residue processing

    Energy Technology Data Exchange (ETDEWEB)

    Gieg, W.; Rank, V.

    1942-10-15

In the first stage of coal hydrogenation, the liquid phase, light and heavy oils were produced, the latter containing the nonliquefied parts of the coal, the coal ash, and the catalyst substances. It was the problem of residue processing to extract from these so-called let-down oils that which could be used as pasting oils for the coal. The object was to obtain a maximum oil extraction and a complete removal of the solids, because if the latter were returned to the process they would needlessly burden the reaction space. Separation of solids in residue processing could be accomplished by filtration, centrifugation, extraction, distillation, or low-temperature carbonization (L.T.C.). Filtration or centrifugation was most suitable since a maximum oil yield could be expected from it, because only a small portion of the let-down oil contained in the filtration or centrifugation residue had to be thermally treated. The most satisfactory centrifuge at this time was the Laval, which delivered liquid centrifuge residue and centrifuge oil continuously. By comparison, the semi-continuous centrifuges delivered plastic residues which were difficult to handle. Various apparatus such as the spiral screw kiln and the ball kiln were used for low-temperature carbonization of centrifuge residues. Both were based on the idea of carbonization in thin layers. Efforts were also being made to produce electrode carbon and briquette binder as by-products of the liquid coal phase.

  15. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors

    NARCIS (Netherlands)

    Stroobants, A. K.; Goldschmidt, H. M. J.; Plebani, M.

    2003-01-01

Background: Random, systematic and sporadic errors, which unfortunately are not uncommon in laboratory medicine, can have a considerable impact on the well-being of patients. Although somewhat difficult to attain, our main goal should be to prevent all possible errors. A good insight on error-prone

  16. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

This review article covers the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring of medication errors, their consequences, and the prevention and management of medication errors, explained clearly with tables that are easy to understand.

  17. Boost first, eliminate systematic error, and individualize CTV to PTV margin when treating lymph nodes in high-risk prostate cancer

    International Nuclear Information System (INIS)

    Rossi, Peter J.; Schreibmann, Eduard; Jani, Ashesh B.; Master, Viraj A.; Johnstone, Peter A.S.

    2009-01-01

Purpose: The purpose of this report is to evaluate the movement of the planning target volume (PTV) in relation to the pelvic lymph nodes (PLNs) during treatment of high-risk prostate cancer. Patients and methods: We reviewed the daily treatment course of ten consecutively treated patients with high-risk prostate cancer. PLNs were included in the initial PTV for each patient. Daily on-board imaging of gold fiducial markers implanted in the prostate was used; daily couch shifts were made as needed and recorded. We analyzed how the daily couch shifts impacted the dose delivered to the PLNs. Results: A PLN clinical target volume was identified in each man using CT-based treatment planning. At treatment planning, the median minimum planned dose to the PLN was 95%, the maximum 101%, and the mean 97%. Daily couch shifting to prostate markers degraded the dose slightly; the median minimum dose delivered to the PLN was 92%, the maximum 101%, and the mean 96%. We found two cases where daily systematic shifts resulted in an underdosing of the PLN by 9% and 29%, respectively. In the other cases, daily shifts were random and led to a mean 2.2% degradation of planned to delivered PLN dose. Conclusions: We demonstrated degradation of the dose delivered to the PLN PTV, which may occur if daily alignment only to the prostate is considered. To improve PLN PTV coverage, it may be preferable to deliver the prostate/boost treatment first and adapt the PTV of the pelvic/nodal treatment to uncertainties documented during the prostate/boost treatment.

  18. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  19. Error management process for power stations

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Takeda, Daisuke; Fujimoto, Junzo; Nagasaka, Akihiko

    2016-01-01

The purpose of this study is to establish an 'error management process for power stations' for systematizing activities for human error prevention and for fostering continuous improvement of these activities. The following are proposed by deriving concepts concerning the error management process from existing knowledge and realizing them through application and evaluation of their effectiveness at a power station: an overall picture of the error management process that facilitates the four functions requisite for managing human error prevention effectively (1. systematizing human error prevention tools, 2. identifying problems based on incident reports and taking corrective actions, 3. identifying good practices and potential problems for taking proactive measures, 4. prioritizing human error prevention tools based on identified problems); detailed steps for each activity (i.e. developing an annual plan for human error prevention, reporting and analyzing incidents and near misses) based on a model of human error causation; procedures and examples of items for identifying gaps between current and desired levels of execution and outputs of each activity; and stages for introducing and establishing the above proposed error management process at a power station. By giving shape to the above proposals at a power station, systematization and continuous improvement of activities for human error prevention in line with the actual situation of the power station can be expected. (author)

  20. Analysis of GRACE Range-rate Residuals with Emphasis on Reprocessed Star-Camera Datasets

    Science.gov (United States)

    Goswami, S.; Flury, J.; Naeimi, M.; Bandikova, T.; Guerr, T. M.; Klinger, B.

    2015-12-01

Since March 2002 the two GRACE satellites orbit the Earth at relatively low altitude. Determination of the gravity field of the Earth, including its temporal variations, from the satellites' orbits and the inter-satellite measurements is the goal of the mission. Yet, the time-variable gravity signal has not been fully exploited. This can be seen better in the computed post-fit range-rate residuals. The errors reflected in the range-rate residuals are due to different sources such as systematic errors, mismodelling errors and tone errors. Here, we analyse the effect of three different star-camera data sets on the post-fit range-rate residuals. On the one hand, we consider the available attitude data, and on the other hand we take the two different data sets which have been reprocessed at the Institute of Geodesy, Hannover, and the Institute of Theoretical Geodesy and Satellite Geodesy, TU Graz, Austria, respectively. Then the differences in the range-rate residuals computed from the different attitude datasets are analysed in this study. Details will be given and results will be discussed.

  1. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

We calculate opacity from k(hν) = -ln[T(hν)]/(pL), where T(hν) is the transmission for photon energy hν, p is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. The error propagates as Δk = (∂k/∂T)ΔT + (∂k/∂(pL))Δ(pL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(pL)/(pL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(pL)/(pL). Transmission is measured in the range of 0.2
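The budget above can be sketched in code, assuming the fractional contributions add in magnitude (the function name and inputs are illustrative, not the authors'):

```python
import math

def opacity_rel_error(b, db, b0, db0, pl, dpl):
    """Fractional opacity error dk/k for k = -ln(T)/(pL) with T = B/B0,
    summing the error contributions in magnitude."""
    t = b / b0
    rel_t = db / b + db0 / b0              # dT/T, which equals d(ln T)
    return rel_t / abs(math.log(t)) + dpl / pl
```

With T = 0.5, 1% fractional error on each backlighter signal and 2% on pL, the opacity error is about 4.9%; the 1/ln T factor is why low-transmission measurements dominate the budget.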

  2. Solow Residuals Without Capital Stocks

    DEFF Research Database (Denmark)

    Burda, Michael C.; Severgnini, Battista

    2014-01-01

    We use synthetic data generated by a prototypical stochastic growth model to assess the accuracy of the Solow residual (Solow, 1957) as a measure of total factor productivity (TFP) growth when the capital stock in use is measured with error. We propose two alternative measurements based on current...
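The Solow residual the paper examines is standard growth accounting: TFP growth is output growth minus factor-share-weighted input growth. A minimal sketch with illustrative numbers (not the paper's synthetic data):

```python
def solow_residual(g_y, g_k, g_l, alpha):
    """TFP growth from output growth g_y, capital growth g_k, labor growth g_l,
    and capital share alpha, under a Cobb-Douglas technology."""
    return g_y - alpha * g_k - (1.0 - alpha) * g_l
```

With 4% output growth, 3% capital growth, 1% labor growth and a capital share of one third, the residual is about 2.3%; any mismeasurement of the capital stock feeds into the residual scaled by alpha, which is the error channel studied in the record above.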

  3. Comparison of orthogonal kilovolt X-ray images and cone-beam CT matching results in setup error assessment and correction for EB-PBI during free breathing

    International Nuclear Information System (INIS)

    Wang Wei; Li Jianbin; Hu Hongguang; Ma Zhifang; Xu Min; Fan Tingyong; Shao Qian; Ding Yun

    2014-01-01

Objective: To compare the differences in setup error (SE) assessment and correction between orthogonal kilovolt X-ray images and CBCT in EB-PBI patients during free breathing. Methods: Nineteen patients treated with EB-PBI after breast-conserving surgery were recruited. Interfraction SE was acquired using orthogonal kilovolt X-ray setup images and CBCT; after on-line setup correction, the residual error was calculated, and the SE, residual error and setup margin (SM) quantified from orthogonal kilovolt X-ray images and CBCT were compared. The Wilcoxon signed-rank test was used to evaluate the differences. Results: The CBCT-based systematic error (Σ) was smaller than the orthogonal kilovolt X-ray based Σ in the AP direction (-1.2 mm vs 2.00 mm; P=0.005), and there were no statistically significant differences in the random error (σ) in the three directions (P=0.948, 0.376, 0.314). After on-line setup correction, CBCT decreased the setup residual error compared with the orthogonal kilovolt X-ray images in the AP direction (Σ: -0.20 mm vs 0.50 mm, P=0.008; σ: 0.45 mm vs 1.34 mm, P=0.002). The CBCT-based SM was also smaller than the orthogonal kilovolt X-ray based SM in the AP direction (Σ: -1.39 mm vs 5.57 mm, P=0.003; σ: 0.00 mm vs 3.2 mm, P=0.003). Conclusions: Compared with kilovolt X-ray images, CBCT underestimates the setup error in the AP direction but decreases the setup residual error significantly. Image-guided radiotherapy and setup error assessment using kilovolt X-ray images for EB-PBI plans is feasible. (authors)
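For context, a widely used recipe for converting population systematic (Σ) and random (σ) setup errors like those reported above into a CTV-to-PTV margin is the van Herk formula SM = 2.5Σ + 0.7σ; the abstract does not state which recipe this study used, so treat this as a generic sketch:

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """CTV-to-PTV margin (mm) from population systematic and random setup SDs,
    per the van Herk margin recipe SM = 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random
```

For example, Σ = 2.0 mm and σ = 3.0 mm give a margin of about 7.1 mm; because the systematic term carries the 2.5 weight, reducing Σ (as on-line correction does) shrinks the margin far faster than reducing σ.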

  4. Residual risk

    African Journals Online (AJOL)

ing the residual risk of transmission of HIV by blood transfusion. An epidemiological approach assumed that all HIV infections detected serologically in first-time donors were pre-existing or prevalent infections, and that all infections detected in repeat blood donors were new or incident infections. During 1986 - 1987, 0,012%.

  5. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  6. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  7. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

Basic approach and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error; classification into systematic and random errors. Statistical fundamentals: probability theories, population distributions (Bernoulli, Poisson, Gauss), the t-test distribution, the χ2 test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ2 test
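The Poisson error propagation listed above can be illustrated for the common case of a background-corrected count rate, where each raw count N carries a √N standard deviation (the counts below are made up for illustration):

```python
import math

def net_rate(gross_counts, t_gross, bkg_counts, t_bkg):
    """Net count rate and its 1-sigma uncertainty from Poisson counting statistics."""
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    # variance of each rate is N/t^2; independent errors add in quadrature
    sigma = math.sqrt(gross_counts / t_gross ** 2 + bkg_counts / t_bkg ** 2)
    return rate, sigma
```

For example, 400 gross counts and 100 background counts, each acquired over 100 s, give a net rate of 3.0 ± 0.22 counts/s.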

  8. Residual stress measurement in a metal microdevice by micro Raman spectroscopy

    International Nuclear Information System (INIS)

    Song, Chang; Du, Liqun; Qi, Leijie; Li, Yu; Li, Xiaojun; Li, Yuanqi

    2017-01-01

Large residual stress induced during the electroforming process cannot be ignored to fabricate reliable metal microdevices. Accurate measurement is the basis for studying the residual stress. Influenced by the topological feature size of micron scale in the metal microdevice, residual stress in it can hardly be measured by common methods. In this manuscript, a methodology is proposed to measure the residual stress in the metal microdevice using micro Raman spectroscopy (MRS). To estimate the residual stress in metal materials, micron sized β-SiC particles were mixed in the electroforming solution for codeposition. First, the calculated expression relating the Raman shifts to the induced biaxial stress for β-SiC was derived based on the theory of phonon deformation potentials and Hooke’s law. Corresponding micro electroforming experiments were performed and the residual stress in Ni–SiC composite layer was both measured by x-ray diffraction (XRD) and MRS methods. Then, the validity of the MRS measurements was verified by comparing with the residual stress measured by XRD method. The reliability of the MRS method was further validated by the statistical student’s t-test. The MRS measurements were found to have no systematic error in comparison with the XRD measurements, which confirm that the residual stresses measured by the MRS method are reliable. Besides that, the MRS method, by which the residual stress in a micro inertial switch was measured, has been confirmed to be a convincing experiment tool for estimating the residual stress in metal microdevice with micron order topological feature size. (paper)

  9. Residual stress measurement in a metal microdevice by micro Raman spectroscopy

    Science.gov (United States)

    Song, Chang; Du, Liqun; Qi, Leijie; Li, Yu; Li, Xiaojun; Li, Yuanqi

    2017-10-01

    Large residual stress induced during the electroforming process cannot be ignored to fabricate reliable metal microdevices. Accurate measurement is the basis for studying the residual stress. Influenced by the topological feature size of micron scale in the metal microdevice, residual stress in it can hardly be measured by common methods. In this manuscript, a methodology is proposed to measure the residual stress in the metal microdevice using micro Raman spectroscopy (MRS). To estimate the residual stress in metal materials, micron sized β-SiC particles were mixed in the electroforming solution for codeposition. First, the calculated expression relating the Raman shifts to the induced biaxial stress for β-SiC was derived based on the theory of phonon deformation potentials and Hooke’s law. Corresponding micro electroforming experiments were performed and the residual stress in Ni-SiC composite layer was both measured by x-ray diffraction (XRD) and MRS methods. Then, the validity of the MRS measurements was verified by comparing with the residual stress measured by XRD method. The reliability of the MRS method was further validated by the statistical student’s t-test. The MRS measurements were found to have no systematic error in comparison with the XRD measurements, which confirm that the residual stresses measured by the MRS method are reliable. Besides that, the MRS method, by which the residual stress in a micro inertial switch was measured, has been confirmed to be a convincing experiment tool for estimating the residual stress in metal microdevice with micron order topological feature size.
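The stress-evaluation step in both records reduces to a linear shift-to-stress conversion, σ = Δω/K, where Δω is the measured Raman peak shift of the β-SiC band and K is a stress coefficient derived from the phonon deformation potentials; the coefficient in the example below is a hypothetical placeholder, not the paper's calibrated value:

```python
def stress_from_shift(delta_omega_cm1, k_cm1_per_gpa):
    """Biaxial residual stress (GPa) from a Raman peak shift (cm^-1),
    assuming a linear shift-stress coefficient (hypothetical value in the example)."""
    return delta_omega_cm1 / k_cm1_per_gpa
```

A peak downshift of 3.2 cm^-1 with an assumed K = -1.6 cm^-1/GPa would correspond to roughly 2 GPa; the sign convention (tensile vs compressive) depends on the calibration of K.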

  10. Residual basins

    International Nuclear Information System (INIS)

    D'Elboux, C.V.; Paiva, I.B.

    1980-01-01

    Exploration for uranium carried out over a major portion of the Rio Grande do Sul Shield has revealed a number of small residual basins developed along glacially eroded channels of pre-Permian age. Mineralization of uranium occurs in two distinct sedimentary units. The lower unit consists of rhythmites overlain by a sequence of black shales, siltstones and coal seams, while the upper one is dominated by sandstones of probable fluvial origin. (Author) [pt

  11. A Systematic Approach to Error Free Telemetry

    Science.gov (United States)

    2017-06-28

    interference problem created by utilizing two antennas to transmit the same telemetry signal [8]. This has also been referred to as the “two antenna...selection, commonly called Best Source Selection (BSS). Up until recently there was not a robust method to assess link quality, time-align each source...and then choose the best source on a bit-by-bit basis. The key here is not the time alignment or the bit-by-bit selection, but the accurate

  12. A systematic error in maximum likelihood fitting

    International Nuclear Information System (INIS)

    Bergmann, U.C.; Riisager, K.

    2002-01-01

    The maximum likelihood method is normally regarded as the safest method for parameter estimation. We show that this method will give a bias in the often-occurring situation where a spectrum of counts is fitted with a theoretical function, unless the fit function is very simple. The bias can become significant when the spectrum contains fewer than about 100 counts or when the fit interval is too short.
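
    The effect described in this record can be reproduced in a small toy Monte Carlo (an illustrative sketch, not the authors' setup): many low-count binned exponential spectra are generated and each is fitted for the lifetime by minimizing a multinomial negative log-likelihood over a grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): 8 bins on a deliberately short fit interval,
# and a grid search over the exponential lifetime tau.
edges = np.linspace(0.0, 4.0, 9)
taus = np.linspace(0.3, 3.0, 300)

def fit_tau(counts):
    # Multinomial negative log-likelihood (up to constants), evaluated
    # for every tau on the grid at once.
    cdf = 1.0 - np.exp(-edges[None, :] / taus[:, None])
    p = np.diff(cdf, axis=1) / cdf[:, -1:]      # normalised bin probabilities
    nll = -np.sum(counts[None, :] * np.log(p), axis=1)
    return taus[np.argmin(nll)]

true_tau, n_events, n_toys = 1.0, 40, 300       # low-count regime
fits = []
for _ in range(n_toys):
    x = rng.exponential(true_tau, n_events)
    counts, _ = np.histogram(x[x < edges[-1]], bins=edges)
    fits.append(fit_tau(counts))
print(f"mean fitted tau over {n_toys} toys: {np.mean(fits):.3f} (true {true_tau})")
```

    Comparing the mean fitted lifetime against the true value over many toys is the standard way to expose a fit bias of this kind; with more events per spectrum the discrepancy shrinks.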

  13. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental to our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals, we reviewed 515 scientific paper references selected by systematic randomized sampling and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion on this theme. Keywords: references, periodicals, research, bibliometrics.

  14. Immediate error correction process following sleep deprivation.

    Science.gov (United States)

    Hsieh, Shulan; Cheng, I-Chen; Tsai, Ling-Ling

    2007-06-01

    Previous studies have suggested that one night of sleep deprivation decreases frontal lobe metabolic activity, particularly in the anterior cingulate cortex (ACC), resulting in decreased performance in various executive function tasks. This study thus attempted to address whether sleep deprivation impaired the executive function of error detection and error correction. Sixteen young healthy college students (seven women, nine men, with ages ranging from 18 to 23 years) participated in this study. Participants performed a modified letter flanker task and were instructed to make immediate error corrections on detecting performance errors. Event-related potentials (ERPs) during the flanker task were obtained using a within-subject, repeated-measure design. The error negativity or error-related negativity (Ne/ERN) and the error positivity (Pe) seen immediately after errors were analyzed. The results show that the amplitude of the Ne/ERN was reduced significantly following sleep deprivation. Reduction also occurred for error trials with subsequent correction, indicating that sleep deprivation influenced error correction ability. This study further demonstrated that the impairment in immediate error correction following sleep deprivation was confined to specific stimulus types, with both Ne/ERN and behavioral correction rates being reduced only for trials in which flanker stimuli were incongruent with the target stimulus, while the response to the target was compatible with that of the flanker stimuli following sleep deprivation. The results thus warrant future systematic investigation of the interaction between stimulus type and error correction following sleep deprivation.

  15. RESIDUAL RISK ASSESSMENTS - RESIDUAL RISK ...

    Science.gov (United States)

    This source category, previously subject to a technology-based standard, will be examined to determine if health or ecological risks are significant enough to warrant further regulation for Coke Ovens. These assessments utilize existing models and databases to examine the multi-media and multi-pollutant impacts of air toxics emissions on human health and the environment. Details on the assessment process and methodologies can be found in EPA's Residual Risk Report to Congress issued in March of 1999 (see web site). The goal is to assess the health risks posed by air toxics emissions from Coke Ovens and to determine if the control technology standards previously established are adequately protecting public health.

  16. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  17. Residual nilpotence and residual solubility of groups

    International Nuclear Information System (INIS)

    Mikhailov, R V

    2005-01-01

    The properties of the residual nilpotence and the residual solubility of groups are studied. The main objects under investigation are the class of residually nilpotent groups such that each central extension of these groups is also residually nilpotent and the class of residually soluble groups such that each Abelian extension of these groups is residually soluble. Various examples of groups not belonging to these classes are constructed by homological methods and methods of the theory of modules over group rings. Several applications of the theory under consideration are presented and problems concerning the residual nilpotence of one-relator groups are considered.

  18. Comments on "A New Random-Error-Correction Code"

    DEFF Research Database (Denmark)

    Paaske, Erik

    1979-01-01

    This correspondence investigates the error propagation properties of six different systems using a (12, 6) systematic double-error-correcting convolutional encoder and a one-step majority-logic feedback decoder. For the generally accepted assumption that channel errors are much more likely to occur...

  19. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and adherence to the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. Because false-positive errors carry a higher risk of unfair decision-making, they should receive more attention than false-negative errors.
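
    The statistical reason repetition helps is simple: the spread of the mean of n independent repeat measurements shrinks like sigma/sqrt(n), and a grossly erroneous single run then stands out against the repeats. A minimal numerical sketch (illustrative values, not from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Compare the spread of single measurements against the spread of the mean
# of 9 repeats of the same measurement. All numbers are illustrative.
sigma, n_repeats, n_trials = 2.0, 9, 5000
single = rng.normal(50.0, sigma, n_trials)                       # one run each
repeated = rng.normal(50.0, sigma, (n_trials, n_repeats)).mean(axis=1)

print("std of single runs:", single.std())       # ~ sigma
print("std of 9-run means:", repeated.std())     # ~ sigma / sqrt(9)
```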

  20. Cover crop residue management for optimizing weed control

    NARCIS (Netherlands)

    Kruidhof, H.M.; Bastiaans, L.; Kropff, M.J.

    2009-01-01

    Although residue management seems a key factor in residue-mediated weed suppression, very few studies have systematically compared the influence of different residue management strategies on the establishment of crop and weed species. We evaluated the effect of several methods of pre-treatment and

  1. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
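
    The probability ellipse mentioned here can be computed directly from a 2x2 position covariance matrix. The sketch below (not taken from the chapter) uses the standard eigen-decomposition construction; the covariance values are illustrative.

```python
import numpy as np

def error_ellipse(cov, prob=0.393):
    # Scale factor k for a bivariate normal: prob = 1 - exp(-k^2 / 2),
    # so prob = 0.393 corresponds to the usual 1-sigma ellipse.
    k = np.sqrt(-2.0 * np.log(1.0 - prob))
    vals, vecs = np.linalg.eigh(cov)                 # eigenvalues ascending
    semi_minor, semi_major = k * np.sqrt(vals)       # ellipse semi-axes
    v = vecs[:, 1]                                   # major-axis direction
    angle = np.degrees(np.arctan2(v[1], v[0])) % 180.0
    return semi_major, semi_minor, angle

cov = np.array([[4.0, 1.5],                          # illustrative covariance
                [1.5, 1.0]])
a, b, theta = error_ellipse(cov)
print(f"semi-major {a:.3f}, semi-minor {b:.3f}, orientation {theta:.1f} deg")
```

    Raising `prob` (e.g. to 0.95) inflates the same ellipse by a larger scale factor k, which is how elliptical error evaluation at different confidence levels is usually done.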

  2. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
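
    First-order error propagation for several measured values reduces to var(f) ≈ J Σ Jᵀ, with J the vector of partial derivatives and Σ the measurement covariance. A minimal sketch for a materials balance (MUF = input − output − inventory; all numbers are illustrative, not from the chapter):

```python
import numpy as np

# First-order (linear) error propagation: var(f) ~= J @ Sigma @ J.T
def propagate(jacobian, cov):
    j = np.atleast_2d(jacobian)
    return float(j @ cov @ j.T)

# Illustrative measured values and standard deviations
measured = {"input": 100.0, "output": 95.2, "inventory": 4.1}
sigmas = np.array([0.5, 0.4, 0.2])
cov = np.diag(sigmas**2)                 # assume independent measurements
jac = np.array([1.0, -1.0, -1.0])        # d(MUF)/d(input, output, inventory)

muf = measured["input"] - measured["output"] - measured["inventory"]
sigma_muf = np.sqrt(propagate(jac, cov))
print(f"MUF = {muf:.2f} +/- {sigma_muf:.2f}")
```

    With independent terms this collapses to the familiar quadrature sum of the three standard deviations; correlated measurements would put off-diagonal terms into `cov`.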

  3. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  4. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Wireless and Internet video applications are inherently subject to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  5. Analysis of residual toluene in food packaging via headspace extraction method using gas chromatography

    International Nuclear Information System (INIS)

    Lim, Ying Chin; Mohd Marsin Sanagi

    2008-01-01

    Polymeric materials are used in many food-contact applications as packaging materials. Residual toluene in such food packaging can migrate into food and thus affect food quality. In this study, a manual headspace analysis was designed and developed. The determination of residual toluene was carried out with the standard addition method and the multiple headspace extraction (MHE) method using gas chromatography with flame ionization detection (GC-FID). Identification of toluene was performed by comparison of its retention time with that of standard toluene and by GC-MS. It was found that the suitable heating temperature was 180 degrees Celsius, with an optimum heating time of 10 minutes. The study also found that the concentration of residual toluene in multicolored samples was higher than in monocolored samples, and that the residual toluene found by the standard addition method was higher than that found by the MHE method. However, a comparison with results obtained from the De Paris laboratory, France, found that the MHE method gave higher accuracy for samples with low analyte concentration. On the other hand, lower accuracy was obtained for samples with a high concentration of residual toluene, due to systematic errors. Comparison between the determination methods showed that the MHE method is more precise than the standard addition method. (author)
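
    For reference, standard-addition quantification works by spiking the sample with known amounts of analyte, fitting response versus added concentration, and reading the unknown off the x-intercept of the fit. A hedged sketch with made-up peak areas (not the study's data):

```python
import numpy as np

# Standard-addition sketch: GC-FID peak area vs added toluene.
# The unknown concentration is the magnitude of the fitted x-intercept.
# All numbers below are illustrative placeholders.
added = np.array([0.0, 1.0, 2.0, 3.0])         # added toluene, ug/g
area = np.array([120.0, 205.0, 288.0, 372.0])  # measured peak areas

slope, intercept = np.polyfit(added, area, 1)  # linear least-squares fit
c_unknown = intercept / slope                  # x-intercept magnitude, ug/g
print(f"residual toluene ~ {c_unknown:.2f} ug/g")
```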

  6. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  7. Learning from Errors.

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-03

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the beneficial effects are particularly salient when individuals strongly believe that their error is correct: Errors committed with high confidence are corrected more readily than low-confidence errors. Corrective feedback, including analysis of the reasoning leading up to the mistake, is crucial. Aside from the direct benefit to learners, teachers gain valuable information from errors, and error tolerance encourages students' active, exploratory, generative engagement. If the goal is optimal performance in high-stakes situations, it may be worthwhile to allow and even encourage students to commit and correct errors while they are in low-stakes learning situations rather than to assiduously avoid errors at all costs.

  8. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    Science.gov (United States)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and the lack of predictive capability. Therefore, the multiplicative error model is a better choice.
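
    The non-constant-variance weakness of the additive model is easy to demonstrate on synthetic data (a sketch with made-up numbers, not the letter's satellite data): when the true error process is multiplicative, additive residuals grow with rain rate while log-space residuals stay homoscedastic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth" with a large dynamic range, as daily precipitation has,
# observed with a multiplicative error: obs = truth * exp(eps).
truth = rng.gamma(shape=0.8, scale=10.0, size=20000) + 0.1
obs = truth * np.exp(rng.normal(0.0, 0.3, truth.size))

add_resid = obs - truth             # additive-model residuals
mult_resid = np.log(obs / truth)    # multiplicative-model residuals (log space)

lo, hi = truth < np.median(truth), truth >= np.median(truth)
print("additive resid std (light/heavy rain):",
      add_resid[lo].std(), add_resid[hi].std())
print("multiplicative resid std (light/heavy rain):",
      mult_resid[lo].std(), mult_resid[hi].std())
```

    The additive residual spread differs sharply between light and heavy rain, whereas the log-space spread is essentially constant, mirroring the letter's criterion (2).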

  9. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights into systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because, especially in basic varieties, forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags based on the categories of the target language. We believe, instead, that it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" than with errors. The article outlines the theoretical background of the project and shows some examples in which the potential of SLA-oriented (non-error-based) tagging is made clearer.

  10. Notes on human error analysis and prediction

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1978-11-01

    The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human errors and their roles in industrial systems are discussed. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and "criteria for analyzability", which must be met by industrial systems so that a systematic analysis can be performed, are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing, as found in the US Licensee Event Reports. (author)

  11. Heuristics and Cognitive Error in Medical Imaging.

    Science.gov (United States)

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  12. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  13. Exploring cosmic origins with CORE: Mitigation of systematic effects

    Science.gov (United States)

    Natoli, P.; Ashdown, M.; Banerji, R.; Borrill, J.; Buzzelli, A.; de Gasperis, G.; Delabrouille, J.; Hivon, E.; Molinari, D.; Patanchon, G.; Polastri, L.; Tomasi, M.; Bouchet, F. R.; Henrot-Versillé, S.; Hoang, D. T.; Keskitalo, R.; Kiiveri, K.; Kisner, T.; Lindholm, V.; McCarthy, D.; Piacentini, F.; Perdereau, O.; Polenta, G.; Tristram, M.; Achucarro, A.; Ade, P.; Allison, R.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Bartlett, J.; Bartolo, N.; Basak, S.; Baumann, D.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Boulanger, F.; Brinckmann, T.; Bucher, M.; Burigana, C.; Cai, Z.-Y.; Calvo, M.; Carvalho, C.-S.; Castellano, M. G.; Challinor, A.; Chluba, J.; Clesse, S.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; de Bernardis, P.; De Zotti, G.; Di Valentino, E.; Diego, J.-M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Finelli, F.; Forastieri, F.; Galli, S.; Genova-Santos, R.; Gerbino, M.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Gruppuso, A.; Hagstotz, S.; Hanany, S.; Handley, W.; Hernandez-Monteagudo, C.; Hervías-Caimapo, C.; Hills, M.; Keihänen, E.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lattanzi, M.; Lesgourgues, J.; Lewis, A.; Liguori, M.; López-Caniego, M.; Luzzi, G.; Maffei, B.; Mandolesi, N.; Martinez-González, E.; Martins, C. J. A. P.; Masi, S.; Matarrese, S.; Melchiorri, A.; Melin, J.-B.; Migliaccio, M.; Monfardini, A.; Negrello, M.; Notari, A.; Pagano, L.; Paiella, A.; Paoletti, D.; Piat, M.; Pisano, G.; Pollo, A.; Poulin, V.; Quartin, M.; Remazeilles, M.; Roman, M.; Rossi, G.; Rubino-Martin, J.-A.; Salvati, L.; Signorelli, G.; Tartari, A.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Valiviita, J.; Van de Weijgaert, R.; van Tent, B.; Vennin, V.; Vielva, P.; Vittorio, N.; Wallis, C.; Young, K.; Zannoni, M.

    2018-04-01

    We present an analysis of the main systematic effects that could impact the measurement of CMB polarization with the proposed CORE space mission. We employ timeline-to-map simulations to verify that the CORE instrumental set-up and scanning strategy allow us to measure sky polarization to a level of accuracy adequate to the mission science goals. We also show how the CORE observations can be processed to mitigate the level of contamination by potentially worrying systematics, including intensity-to-polarization leakage due to bandpass mismatch, asymmetric main beams, pointing errors and correlated noise. We use analysis techniques that are well validated on data from current missions such as Planck to demonstrate how the residual contamination of the measurements by these effects can be brought to a level low enough not to hamper the scientific capability of the mission, nor significantly increase the overall error budget. We also present a prototype of the CORE photometric calibration pipeline, based on that used for Planck, and discuss its robustness to systematics, showing how CORE can achieve its calibration requirements. While a fine-grained assessment of the impact of systematics requires a level of knowledge of the system that can only be achieved in a future study phase, the analysis presented here strongly suggests that the main areas of concern for the CORE mission can be addressed using existing knowledge, techniques and algorithms.

  14. Inborn errors of metabolism

    Science.gov (United States)

    Metabolism - inborn errors of ... Bodamer OA. Approach to inborn errors of metabolism. In: Goldman L, Schafer AI, eds. Goldman-Cecil Medicine . 25th ed. Philadelphia, PA: Elsevier Saunders; 2016:chap 205. Rezvani I, Rezvani GA. An ...

  15. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    Science.gov (United States)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 IGS stations worldwide showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were respectively shortened by 4.8% and 4.0% when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively mitigated.

  16. Drug Errors in Anaesthesiology

    Directory of Open Access Journals (Sweden)

    Rajnish Kumar Jain

    2009-01-01

    Medication errors are a leading cause of morbidity and mortality in hospitalized patients. The incidence of these drug errors during anaesthesia is not certain. They impose a considerable financial burden on health-care systems, apart from the losses to patients. Common causes of these errors and their prevention are discussed.

  17. ATC operational error analysis.

    Science.gov (United States)

    1972-01-01

    The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...

  18. Spatial measurement errors in the field of spatial epidemiology.

    Science.gov (United States)

    Zhang, Zhijie; Manjourides, Justin; Cohen, Ted; Hu, Yi; Jiang, Qingwu

    2016-07-01

    Spatial epidemiology has been aided by advances in geographic information systems, remote sensing, global positioning systems and the development of new statistical methodologies specifically designed for such data. Given the growing popularity of these studies, we sought to review and analyze the types of spatial measurement errors commonly encountered during spatial epidemiological analysis of spatial data. Google Scholar, Medline, and Scopus databases were searched using a broad set of terms for papers indexed by a term indicating location (space or geography or location or position) and measurement error (measurement error or measurement inaccuracy or misclassification or uncertainty); we reviewed all papers appearing before December 20, 2014. These papers and their citations were reviewed to identify their relevance to our review. We were able to define and classify spatial measurement errors into four groups: (1) pure spatial location measurement errors, including both non-instrumental errors (multiple addresses, geocoding errors, outcome aggregation, and covariate aggregation) and instrumental errors; (2) location-based outcome measurement errors (purely outcome measurement errors and missing outcome measurements); (3) location-based covariate measurement errors (address proxies); and (4) covariate-outcome spatially misaligned measurement errors. We propose how these four classes of errors can be unified within an integrated theoretical model, and possible solutions are discussed. Spatial measurement errors are a ubiquitous threat to the validity of spatial epidemiological studies. We propose a systematic framework for understanding the various mechanisms which generate spatial measurement errors and present practical examples of such errors.

  19. Prescribing Errors Involving Medication Dosage Forms

    Science.gov (United States)

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  20. Error bounds from extra precise iterative refinement

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason

    2005-02-07

    We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) There was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n}·ε_w), the computed normwise (resp. componentwise) error bound is at most 2·max{10, √n}·ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the single-precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
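
The refinement loop the abstract describes can be sketched in a few lines. This is an illustrative mixed-precision version in NumPy (float32 as the working precision, float64 for the extra-precise residual), not the LAPACK implementation the authors propose:

```python
import numpy as np

def refine(A, b, iters=3):
    """Iterative refinement: solve Ax = b in working precision (float32),
    computing residuals in extra precision (float64)."""
    A32 = A.astype(np.float32)
    b32 = b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)  # initial working-precision solve
    for _ in range(iters):
        # residual computed in extra (double) precision
        r = b.astype(np.float64) - A.astype(np.float64) @ x
        # correction solved in working precision
        d = np.linalg.solve(A32, r.astype(np.float32))
        x = x + d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
x_true = rng.standard_normal(50)
b = A @ x_true
x = refine(A, b)
print(np.max(np.abs(x - x_true)))  # far below single-precision roundoff
```

For a moderately conditioned matrix, each pass contracts the error by roughly cond(A)·ε_working, which is why a handful of iterations suffices.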

  1. Novel feature for catalytic protein residues reflecting interactions with other residues.

    Directory of Open Access Journals (Sweden)

    Yizhou Li

    Full Text Available Owing to their potential for systematic analysis, complex networks have been widely used in proteomics. Representing a protein structure as a topology network provides novel insight into understanding protein folding mechanisms, stability and function. Here, we develop a new feature to reveal correlations between residues using a protein structure network. In an original attempt to quantify the effects of several key residues on catalytic residues, a power function was used to model interactions between residues. The results indicate that focusing on a few residues is a feasible approach to identifying catalytic residues. The spatial environment surrounding a catalytic residue was analyzed in a layered manner. We present evidence that correlation between residues is related to their distance apart: (i) most environmental parameters of the outer layer make a smaller contribution to prediction and (ii) catalytic residues tend to be located near key positions in enzyme folds. Feature analysis revealed satisfactory performance for our features, which were combined with several conventional features in a prediction model for catalytic residues using a comprehensive data set from the Catalytic Site Atlas. Values of 88.6 for sensitivity and 88.4 for specificity were obtained by 10-fold cross-validation. These results suggest that these features reveal the mutual dependence of residues and are promising for further study of the structure-function relationship.

  2. Aircraft system modeling error and control error

    Science.gov (United States)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.

  3. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
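
The ANOVA-based estimators referred to in the note can be illustrated on simulated balanced data for a one-factor random effect model. The variable names and the simulated error magnitudes below are hypothetical, not taken from the paper:

```python
import numpy as np

def variance_components(errors):
    """One-factor random-effects ANOVA estimates of the systematic
    (between-patient) and random (within-patient) setup-error components.
    `errors`: array of shape (patients, fractions); balanced data assumed."""
    p, n = errors.shape
    patient_means = errors.mean(axis=1)
    grand_mean = errors.mean()
    msb = n * np.sum((patient_means - grand_mean) ** 2) / (p - 1)          # between mean square
    msw = np.sum((errors - patient_means[:, None]) ** 2) / (p * (n - 1))   # within mean square
    sigma_random = np.sqrt(msw)
    sigma_systematic = np.sqrt(max(msb - msw, 0.0) / n)  # ANOVA (method-of-moments) estimator
    return grand_mean, sigma_systematic, sigma_random

rng = np.random.default_rng(1)
true_sys, true_rand = 2.0, 3.0                     # mm, assumed for the simulation
p, n = 200, 10                                     # patients, fractions each
data = true_sys * rng.standard_normal((p, 1)) + true_rand * rng.standard_normal((p, n))
print(variance_components(data))                   # ≈ (0, 2.0, 3.0)
```

Note how the naive approach of taking the standard deviation of patient means would overestimate the systematic component by the leaked term σ²/n, which the MSB − MSW subtraction removes.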

  4. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy.

    Science.gov (United States)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-09-01

    The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Balanced data according to the one-factor random effect model were assumed. Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  5. Leptogenesis and residual CP symmetry

    International Nuclear Information System (INIS)

    Chen, Peng; Ding, Gui-Jun; King, Stephen F.

    2016-01-01

    We discuss flavour dependent leptogenesis in the framework of lepton flavour models based on discrete flavour and CP symmetries applied to the type-I seesaw model. Working in the flavour basis, we analyse the case of two general residual CP symmetries in the neutrino sector, which corresponds to all possible semi-direct models based on a preserved Z2 in the neutrino sector, together with a CP symmetry, which constrains the PMNS matrix up to a single free parameter which may be fixed by the reactor angle. We systematically study and classify this case for all possible residual CP symmetries, and show that the R-matrix is tightly constrained up to a single free parameter, with only certain forms being consistent with successful leptogenesis, leading to possible connections between leptogenesis and PMNS parameters. The formalism is completely general in the sense that the two residual CP symmetries could result from any high energy discrete flavour theory which respects any CP symmetry. As a simple example, we apply the formalism to a high energy S4 flavour symmetry with a generalized CP symmetry, broken to two residual CP symmetries in the neutrino sector, recovering familiar results for PMNS predictions, together with new results for flavour dependent leptogenesis.

  6. Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel

    Science.gov (United States)

    Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.

    2007-01-01

    A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
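
The idea of the activity can be mimicked numerically. The sketch below (with an assumed cookie thickness and invented error sizes) shows how random scatter leaves the fitted slope nearly unchanged, while a systematic offset shows up cleanly in the intercept:

```python
import numpy as np

# Height of a stack of sandwich cookies: h = thickness * n (12 mm thickness assumed)
n = np.arange(1, 11)
true_t = 12.0
rng = np.random.default_rng(0)

random_err = true_t * n + rng.normal(0, 0.5, n.size)   # random measurement scatter (±0.5 mm)
systematic_err = true_t * n + 2.0                      # constant +2 mm offset (e.g. a zero error)

# Least-squares fits: the slope recovers the thickness; the intercept exposes the offset
slope_r, icept_r = np.polyfit(n, random_err, 1)
slope_s, icept_s = np.polyfit(n, systematic_err, 1)
print(slope_r, icept_r)   # slope ≈ 12, intercept ≈ 0
print(slope_s, icept_s)   # slope ≈ 12, intercept ≈ 2
```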

  7. The Acquisition of Subject-Verb Agreement in Written French: From Novices to Experts' Errors.

    Science.gov (United States)

    Fayol, Michel; Largy, Pierre; Hupet, Michel

    1999-01-01

    Aims at demonstrating the gradual automatization of subject-verb agreement operation in young writers by examining developmental changes in the occurrence of agreement errors. Finds that subjects' performance moved from systematic errors to attraction errors through an intermediate phase. Concludes that attraction errors are a byproduct of the…

  8. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    Instrumental measurements are often used to represent a whole object even though only a small part of the object is actually measured. This can introduce an error due to the inhomogeneity of the product. Together with other errors resulting from the measuring process, such errors may have a serious impact on the results when the instrumental measurements are used for multivariate regression and prediction. This paper gives examples of how errors influencing the predictions obtained by a multivariate regression model can be quantified and handled. Only random errors are considered here, while in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  9. Error detection method

    Science.gov (United States)

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).

  10. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were classified according to function, and errors were classified as pre-analytical, analytical, post-analytical, or service-related, and then further divided into descriptive subgroups. Samples were taken from hospital wards (38.6%), outpatient clinics (25.7%), general practitioners (29.4%), and other hospitals ...-technicians collected blood samples. CONCLUSIONS: Each clinical laboratory should record errors in a structured manner. A relation database is a useful tool for the recording and extraction of data, as the database can be structured to reflect the workflow at each individual laboratory.

  11. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recent published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  12. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signals, which appear as self-clutter in the radar image. When digital techniques are used for generation and processing of the radar signal it is possible to reduce these error signals. In the paper the quadrature devices are analyzed, and two different error compensation methods are considered. The practical...
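
One common digital compensation scheme for a known channel imbalance can be sketched as follows. The gain and phase values are illustrative, and this is a generic correction under the assumption that the imbalance parameters have been measured by calibration; it is not necessarily either of the two methods the paper considers:

```python
import numpy as np

def distort(z, g=1.05, phi=np.deg2rad(3)):
    """Apply receiver channel imbalance: Q-channel gain g and quadrature phase error phi."""
    i, q = z.real, z.imag
    return i + 1j * g * (i * np.sin(phi) + q * np.cos(phi))

def compensate(z, g=1.05, phi=np.deg2rad(3)):
    """Invert the imbalance, assuming g and phi are known from calibration."""
    i = z.real
    q = (z.imag / g - i * np.sin(phi)) / np.cos(phi)
    return i + 1j * q

t = np.linspace(0, 1, 1000, endpoint=False)
z = np.exp(2j * np.pi * 50 * t)        # ideal complex baseband tone
z_meas = distort(z)                    # imbalance creates an image tone (self-clutter)
z_corr = compensate(z_meas)
print(np.max(np.abs(z_corr - z)))      # ≈ 0: the image is removed
```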

  13. Biologic and physical fractionation effects of random geometric errors

    NARCIS (Netherlands)

    van Herk, Marcel; Witte, Marnix; van der Geer, Joris; Schneider, Christoph; Lebesque, Joos V.

    2003-01-01

    PURPOSE: We are developing a system to model the effect of random and systematic geometric errors on radiotherapy delivery. The purpose of this study was to investigate biologic and physical fractionation effects of random geometric errors and respiration motion and compare the resulting dose

  14. Prioritising the prevention of medication handling errors.

    Science.gov (United States)

    Bertsche, Thilo; Niemann, Dorothee; Mayer, Yvonne; Ingram, Katrin; Hoppe-Tichy, Torsten; Haefeli, Walter E

    2008-12-01

    Medication errors are frequent in a hospital setting and often caused by inappropriate drug handling. Systematic strategies for their prevention, however, are still lacking. We developed and applied a classification model to categorise medication handling errors and defined the urgency of correction on the basis of these findings. Nurses on medical wards (including intensive and intermediate care units) of a 1,680-bed teaching hospital. In a prospective observational study we evaluated the prevalence of 20 predefined medication handling errors on the ward. In a concurrent questionnaire survey, we assessed the knowledge of the nurses on medication handling. The severity of errors observed in individual areas was scored considering prevalence, potential risk of an error, and the involved drug. These scores and the prevalence of corresponding knowledge deficits were used to define the urgency of preventive strategies according to a four-field decision matrix. Prevalence and potential risk of medication handling errors, corresponding knowledge deficits in nurses committing the errors, and priority of quality improvement. In 1,376 observed processes 833 medication handling errors were detected. Errors concerning preparation (mean 0.88 errors per observed process [95% CI: 0.81-0.96], N = 645) were more frequent than administration errors (0.36 [0.32-0.41], N = 701, P < …). Parenteral drugs (1.10 [1.00-1.19], N = 492) were more often involved in errors than enteral drugs (0.32 [0.28-0.36], N = 794, P < …). …% of errors were caused by … drugs, 81.6% by uncomplicated drugs, and 6.9% by nutritional supplements or diluents without active ingredient. According to the decision matrix that also considered knowledge deficits, two error types concerning enteral drugs (flaws in light protection and prescribing information) were given maximum priority for quality improvement. For parenteral drugs five errors (incompatibilities, flaws in hygiene, duration of administration, check for visible abnormalities, and again prescribing

  15. Modeling the North American vertical datum of 1988 errors in the conterminous United States

    Science.gov (United States)

    Li, X.

    2018-02-01

    A large systematic difference (ranging from -20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and the pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate the geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions such as polynomials, B-splines, and Legendre functions and the Latent Variable Analysis (LVA) such as the Factor Analysis (FA) are used to analyze the systematic difference. While the regression results give a mathematical model, they do not reveal a great deal about the physical reasons that caused the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, there is still a significant amount of non-Gaussian signal left in the residuals of the conventional regression models. On the other hand, the FA method not only provides a better fit of the data, but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolating method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.
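
The regression side of such an analysis can be illustrated with synthetic data. The tilt, offset, and noise level below are invented for the example and do not reproduce the paper's NAVD 88 results; only the workflow (fit a low-order basis, inspect the residuals) follows the abstract:

```python
import numpy as np

# Hypothetical illustration: fit a first-order polynomial surface to a systematic
# height difference sampled at scattered stations (lon, lat in degrees).
rng = np.random.default_rng(2)
lon = rng.uniform(-125, -70, 500)
lat = rng.uniform(25, 50, 500)
# synthetic east-west tilt plus noise, standing in for NAVD 88 minus gravimetric geoid
diff = 0.02 * (lon + 95) + 0.3 + rng.normal(0, 0.03, 500)   # metres

# design matrix for a first-order polynomial in lon/lat
X = np.column_stack([np.ones_like(lon), lon, lat])
coef, *_ = np.linalg.lstsq(X, diff, rcond=None)
resid = diff - X @ coef
print(coef)          # recovers the assumed tilt and offset
print(resid.std())   # ≈ 0.03 m once the trend is removed
```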

  16. Modeling the North American vertical datum of 1988 errors in the conterminous United States

    Directory of Open Access Journals (Sweden)

    Li X.

    2018-01-01

    Full Text Available A large systematic difference (ranging from −20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and the pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate the geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions such as polynomials, B-splines, and Legendre functions and the Latent Variable Analysis (LVA) such as the Factor Analysis (FA) are used to analyze the systematic difference. While the regression results give a mathematical model, they do not reveal a great deal about the physical reasons that caused the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, there is still a significant amount of non-Gaussian signal left in the residuals of the conventional regression models. On the other hand, the FA method not only provides a better fit of the data, but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolating method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.

  17. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  18. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
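
For a product of independently measured factors (gross weight × uranium concentration × 235U enrichment), the first-order Taylor-series variance approximation mentioned above reduces to adding relative variances. The batch values and relative errors below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def propagate_product(value_rel_sd_pairs):
    """Return (value, standard deviation) for a product of independent factors,
    each given as (value, relative standard deviation).
    Uses the first-order Taylor term: (sd/value)^2 = sum of relative variances."""
    value = math.prod(v for v, _ in value_rel_sd_pairs)
    rel_var = sum(r ** 2 for _, r in value_rel_sd_pairs)
    return value, value * math.sqrt(rel_var)

# hypothetical batch: 1000 kg gross weight (0.1% rel. sd),
# 87.7% uranium concentration (0.2%), 0.711% 235U enrichment (0.5%)
m, sd = propagate_product([(1000.0, 0.001), (0.877, 0.002), (0.00711, 0.005)])
print(m, sd)   # ≈ 6.235 kg of 235U, sd ≈ 0.034 kg
```

Summing such variances over all uncorrelated transactions in the material balance area, as suggested by Jaech, then yields the limit of error on the inventory difference.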

  19. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  20. Medical error and disclosure.

    Science.gov (United States)

    White, Andrew A; Gallagher, Thomas H

    2013-01-01

    Errors occur commonly in healthcare and can cause significant harm to patients. Most errors arise from a combination of individual, system, and communication failures. Neurologists may be involved in harmful errors in any practice setting and should familiarize themselves with tools to prevent, report, and examine errors. Although physicians, patients, and ethicists endorse candid disclosure of harmful medical errors to patients, many physicians express uncertainty about how to approach these conversations. A growing body of research indicates physicians often fail to meet patient expectations for timely and open disclosure. Patients desire information about the error, an apology, and a plan for preventing recurrence of the error. To meet these expectations, physicians should participate in event investigations and plan thoroughly for each disclosure conversation, preferably with a disclosure coach. Physicians should also anticipate and attend to the ongoing medical and emotional needs of the patient. A cultural change towards greater transparency following medical errors is in motion. Substantial progress is still required, but neurologists can further this movement by promoting policies and environments conducive to open reporting, respectful disclosure to patients, and support for the healthcare workers involved. © 2013 Elsevier B.V. All rights reserved.

  1. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  2. Force Reproduction Error Depends on Force Level, whereas the Position Reproduction Error Does Not

    NARCIS (Netherlands)

    Onneweer, B.; Mugge, W.; Schouten, Alfred Christiaan

    2016-01-01

    When reproducing a previously perceived force or position humans make systematic errors. This study determined the effect of force level on force and position reproduction, when both target and reproduction force are self-generated with the same hand. Subjects performed force reproduction tasks at

  3. Status update of the effort to correct the SDO/HMI systematic errors in Doppler velocity and derived data products

    Science.gov (United States)

    Scherrer, Philip H.

    2017-08-01

    This poster provides an update of the status of the efforts to understand and correct the leakage of the SDO orbit velocity into most HMI data products. The following is extracted from the abstract for the similar topic presented at the 2016 SPD meeting: “The Helioseismic and Magnetic Imager (HMI) instrument on the Solar Dynamics Observatory (SDO) measures sets of filtergrams which are converted into velocity and magnetic field maps. In addition to solar photospheric motions the velocity measurements include a direct component from the line-of-sight component of the SDO orbit. Since the magnetic field is computed as the difference between the velocity measured in left and right circular polarization, the orbit velocity is canceled only if the velocity is properly calibrated. When the orbit velocity is subtracted the remaining "solar" velocity shows a residual signal which is equal to about 2% of the c. +- 3000 m/s orbit velocity in a nearly linear relationship. This implies an error in our knowledge of some of the details of as-built filter components. This systematic error is the source of 12- and 24-hour variations in most HMI data products. While the instrument as presently calibrated (Couvidat et al. 2012 and 2016) meets all of the “Level-1” mission requirements it fails to meet the stated goal of 10 m/s accuracy for velocity data products. For the velocity measurements this has not been a significant problem since the prime HMI goals of obtaining data for helioseismology are not affected by this systematic error. However the orbit signal leaking into the magnetograms and vector magnetograms degrades the ability to accomplish some of the mission science goals at the expected levels of accuracy. This poster presents the current state of understanding of the source of this systematic error and prospects for near term improvement in the accuracy of the filter profile model.”

  4. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  5. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancies occur in 2-20 % of reports. Fortunately, most of these are minor errors or, if serious, are found and corrected with sufficient promptness; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. Misdiagnosis/misinterpretation rates rise in the emergency setting and in the early stages of the learning curve, as in residency. Para-physiological and pathological pitfalls in neuroradiology include calcifications and brain stones, pseudofractures, enlargement of the subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and neuroradiological emergencies. To minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately and in a timely fashion directly to the treatment team.

  6. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  7. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  8. Disclosure of medical errors.

    Science.gov (United States)

    Matlow, Anne; Stevens, Polly; Harrison, Christine; Laxer, Ronald M

    2006-12-01

    The 1999 release of the Institute of Medicine's document To Err is Human was akin to removing the lid of Pandora's box. Not only were the magnitude and impact of medical errors now apparent to those working in the health care industry, but consumers of health care were alerted to the occurrence of medical events causing harm. One specific solution advocated was the disclosure to patients and their families of adverse events resulting from medical error. Knowledge of the historical perspective, ethical underpinnings, and medico-legal implications gives us a better appreciation of current recommendations for disclosing adverse events resulting from medical error to those affected.

  9. On the error analysis of the meshless FDM and its multipoint extension

    Science.gov (United States)

    Jaworska, Irena

    2018-01-01

    The error analysis for meshless methods, especially the Meshless Finite Difference Method (MFDM), is discussed in the paper. Both a priori and a posteriori error estimation are considered. The experimental order of convergence confirms the theoretically developed a priori error bound. The higher-order extension of the MFDM, the multipoint approach, may be used as a source of an improved reference solution, instead of the true analytical one, for the global and local estimation of solution and residual errors. Several types of a posteriori error estimators are described. A variety of tests confirms the high quality of a posteriori error estimation based on the multipoint MFDM.
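
The idea of using a refined solution as an improved reference for a posteriori error estimation can be sketched with a minimal example. Note the caveat: this uses a classical 1-D finite difference Poisson solve, not the meshless/multipoint MFDM itself, and all problem data are invented for illustration.

```python
import numpy as np

f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # chosen so that u_exact = sin(pi x)
u_exact = lambda x: np.sin(np.pi * x)

def solve_poisson(n):
    """Second-order FD solve of u'' = f on (0,1), u(0) = u(1) = 0, n interior nodes."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

x_c, u_c = solve_poisson(19)        # coarse solution to be assessed
x_f, u_f = solve_poisson(39)        # refined solution as an improved reference
u_ref = np.interp(x_c, x_f, u_f)    # restrict the reference to the coarse nodes

est_error = np.max(np.abs(u_c - u_ref))          # a posteriori estimate, no exact solution needed
true_error = np.max(np.abs(u_c - u_exact(x_c)))  # available here only for comparison
```

Since the error of a second-order scheme shrinks by roughly a factor of four on the refined grid, the estimate recovers about three quarters of the true error, which is the usual price of replacing the analytical solution by an improved reference.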

  10. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not.
It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance
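
As a rough numerical illustration of this idea (a sketch under invented assumptions, not the author's derivation), a weighted-least-squares covariance can be rescaled by the average weighted residual variance so that unmodeled error sources show up in the state uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear measurement model y = H @ x_true + noise, where the actual noise is
# larger than the assumed noise, mimicking unmodeled error sources
m, n = 200, 3
H = rng.normal(size=(m, n))
x_true = np.array([1.0, -2.0, 0.5])
sigma_assumed, sigma_actual = 0.1, 0.3
y = H @ x_true + rng.normal(scale=sigma_actual, size=m)

W = np.eye(m) / sigma_assumed**2            # weights from the assumed noise
N = H.T @ W @ H                              # normal matrix
x_hat = np.linalg.solve(N, H.T @ W @ y)      # weighted least squares estimate

# Traditional covariance: only the assumed observation errors mapped to state space
P_traditional = np.linalg.inv(N)

# Empirical covariance: scale by the average weighted residual variance, so the
# actual residuals (whatever their source) determine the state uncertainty
r = y - H @ x_hat
scale = (r @ W @ r) / (m - n)    # ~ (sigma_actual / sigma_assumed)**2
P_empirical = scale * P_traditional
```

Here the empirical matrix inflates the traditional one by roughly the ratio of actual to assumed noise variance, capturing the error the traditional form misses.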

  11. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  12. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
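
The inflation described here can be illustrated with a generic simulation (this is not the MERDA moment-estimation method, and all numbers are invented): when the rounding interval d is comparable to the weighing error, the variance of the recorded values is inflated by roughly d²/12 over the true weighing variance.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5    # weighing (scale) error, in grams
d = 1.0        # rounding interval comparable to sigma, so rounding matters

# Repeated weighings of one item, then coarse rounding of the recorded value
true_reading = 100.37 + rng.normal(scale=sigma, size=200_000)
recorded = d * np.round(true_reading / d)

# With this grouping, the recorded variance is inflated by about d**2 / 12
# (the variance of a uniform rounding error); for still coarser grouping the
# rounding error also becomes correlated with the weighing error.
print(np.var(recorded), sigma**2 + d**2 / 12)
```

With data falling into only a few cells, this simple additive correction breaks down, which is the regime where the moment-estimation method of the paper is needed.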

  13. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  14. Error Reporting Logic

    National Research Council Canada - National Science Library

    Jaspan, Ciera; Quan, Trisha; Aldrich, Jonathan

    2008-01-01

    ... it. In this paper, we introduce error reporting logic (ERL), an algorithm and tool that produces succinct explanations for why a target system violates a specification expressed in first order predicate logic...

  15. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  16. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  17. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Full Text Available Objective: To identify and quantify the most frequent errors in inpatients’ medical prescriptions. Methods: A survey of prescription errors was performed in the inpatients’ medical prescriptions, from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, involving the healthcare team as a whole. Among the 16 types of errors detected, the most frequent occurrences were lack of information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); 45 cases (12.4%) of wrong transcription to the information system; 30 cases (8.3%) of duplicate drugs; doses higher than recommended (24 events, 6.6%); and 29 cases (8.0%) of prescriptions indicating allergy without specifying it. Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for identifying and preventing these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for analyzing medical prescriptions before preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves inpatient safety and the success of the prescribed therapy.

  18. Reliability and measurement error of 3-dimensional regional lumbar motion measures

    DEFF Research Database (Denmark)

    Mieritz, Rune M; Bronfort, Gert; Kawchuk, Greg

    2012-01-01

    The purpose of this study was to systematically review the literature on reproducibility (reliability and/or measurement error) of 3-dimensional (3D) regional lumbar motion measurement systems.

  19. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  20. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    We present a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and offer suggestions for improving the methodology of extracting quantitative information from radiographed objects.

  1. Human error in aviation operations

    Science.gov (United States)

    Nagel, David C.

    1988-01-01

    The role of human error in commercial and general aviation accidents and the techniques used to evaluate it are reviewed from a human-factors perspective. Topics addressed include the general decline in accidents per million departures since the 1960s, the increase in the proportion of accidents due to human error, methods for studying error, theoretical error models, and the design of error-resistant systems. Consideration is given to information acquisition and processing errors, visually guided flight, disorientation, instrument-assisted guidance, communication errors, decision errors, debiasing, and action errors.

  2. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes, and their neural correlates, associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) that a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances, such as the combination of 3D motion capture techniques with EEG, will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  3. Medication errors--new approaches to prevention.

    Science.gov (United States)

    Merry, Alan F; Anderson, Brian J

    2011-07-01

    Medication errors in pediatric anesthesia represent an important risk to children. Concerted action to reduce harm from this cause is overdue. An understanding of the genesis of avoidable adverse drug events may facilitate the development of effective countermeasures to the events or their effects. Errors include those involving the automatic system of cognition and those involving the reflective system. Errors and violations are distinct, but violations often predispose to error. The system of medication administration is complex, and many aspects of it are conducive to error. Evidence-based practices to reduce the risk of medication error in general include those encompassed by the following recommendations: systematic countermeasures should be used to decrease the number of drug administration errors in anesthesia; the label on any drug ampoule or syringe should be read carefully before a drug is drawn up or injected; the legibility and contents of labels on ampoules and syringes should be optimized according to agreed standards; syringes should always be labeled; formal organization of drug drawers and workspaces should be used; labels should be checked with a second person or a device before a drug is drawn up or administered. Dosage errors are particularly common in pediatric patients. Causes that should be addressed include a lack of pediatric formulations and/or presentations of medication that necessitates dilution before administration or the use of intravenous formulations for oral administration in children, a frequent failure to obtain accurate weights for patients and a paucity of pharmacokinetic and pharmacodynamic data. Technological innovations, including the use of bar codes and various cognitive aids, may facilitate compliance with these recommendations. 
Improved medication safety requires a system-wide strategy standardized at least to the level of the institution; it is the responsibility of institutional leadership to introduce such strategies.

  4. Adaptive residual DPCM for lossless intra coding

    Science.gov (United States)

    Cai, Xun; Lim, Jae S.

    2015-03-01

    In the Differential Pulse-code Modulation (DPCM) image coding, the intensity of a pixel is predicted as a linear combination of a set of surrounding pixels and the prediction error is encoded. In this paper, we propose the adaptive residual DPCM (ARDPCM) for intra lossless coding. In the ARDPCM, intra residual samples are predicted using adaptive mode-dependent DPCM weights. The weights are estimated by minimizing the Mean Squared Error (MSE) of coded data and they are synchronized at the encoder and the decoder. The proposed method is implemented on the High Efficiency Video Coding (HEVC) reference software. Experimental results show that the ARDPCM significantly outperforms HEVC lossless coding and HEVC with the DPCM. The proposed method is also computationally efficient.
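
A minimal sketch of the core idea, estimating DPCM prediction weights by minimizing the MSE over available samples, might look as follows. The toy 2-D signal and neighbor choice are invented for illustration; this is not the HEVC reference implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "image" with strong horizontal correlation (cumulative sum along rows)
img = np.cumsum(rng.normal(size=(64, 64)), axis=1)

# Gather (left, top) neighbors and the current pixel over the interior
left = img[1:, :-1].ravel()
top = img[:-1, 1:].ravel()
cur = img[1:, 1:].ravel()

# Fixed DPCM: predict each pixel from its left neighbor with weight 1
err_fixed = cur - left

# Adaptive DPCM: least-squares weights on (left, top), i.e. weights chosen to
# minimize the MSE of the prediction residual, as both encoder and decoder
# could do from already-coded samples
A = np.column_stack([left, top])
w, *_ = np.linalg.lstsq(A, cur, rcond=None)
err_adapt = cur - A @ w

print(np.mean(err_fixed**2), np.mean(err_adapt**2))
```

Because the fixed predictor (weights 1 and 0) lies inside the space the least-squares fit searches, the adaptive residual energy can never exceed the fixed one on the fitting data, which is the mechanism behind the reported coding gain.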

  5. Bayesian analysis of data and model error in rainfall-runoff hydrological models

    Science.gov (United States)

    Kavetski, D.; Franks, S. W.; Kuczera, G.

    2004-12-01

    A major unresolved issue in the identification and use of conceptual hydrologic models is realistic description of uncertainty in the data and model structure. In particular, hydrologic parameters often cannot be measured directly and must be inferred (calibrated) from observed forcing/response data (typically, rainfall and runoff). However, rainfall varies significantly in space and time, yet is often estimated from sparse gauge networks. Recent work showed that current calibration methods (e.g., standard least squares, multi-objective calibration, generalized likelihood uncertainty estimation) ignore forcing uncertainty and assume that the rainfall is known exactly. Consequently, they can yield strongly biased and misleading parameter estimates. This deficiency confounds attempts to reliably test model hypotheses, to generalize results across catchments (the regionalization problem) and to quantify predictive uncertainty when the hydrologic model is extrapolated. This paper continues the development of a Bayesian total error analysis (BATEA) methodology for the calibration and identification of hydrologic models, which explicitly incorporates the uncertainty in both the forcing and response data, and allows systematic model comparison based on residual model errors and formal Bayesian hypothesis testing (e.g., using Bayes factors). BATEA is based on explicit stochastic models for both forcing and response uncertainty, whereas current techniques focus solely on response errors. Hence, unlike existing methods, the BATEA parameter equations directly reflect the modeler's confidence in all the data. We compare several approaches to approximating the parameter distributions: a) full Markov Chain Monte Carlo methods and b) simplified approaches based on linear approximations. Studies using synthetic and real data from the US and Australia show that BATEA systematically reduces the parameter bias, leads to more meaningful model fits and allows model comparison taking

  6. Residual gas analysis

    International Nuclear Information System (INIS)

    Berecz, I.

    1982-01-01

    Determination of the residual gas composition in vacuum systems by a special mass spectrometric method was presented. The quadrupole mass spectrometer (QMS) and its application in thin film technology was discussed. Results, partial pressure versus time curves as well as the line spectra of the residual gases in case of the vaporization of a Ti-Pd-Au alloy were demonstrated together with the possible construction schemes of QMS residual gas analysers. (Sz.J.)

  7. Pediatric antidepressant medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Bundy, David G; Shore, Andrew D; Colantuoni, Elizabeth; Morlock, Laura L; Miller, Marlene R

    2010-01-01

    To describe inpatient and outpatient pediatric antidepressant medication errors. We analyzed all error reports from the United States Pharmacopeia MEDMARX database, from 2003 to 2006, involving antidepressant medications and patients younger than 18 years. Of the 451 error reports identified, 95% reached the patient, 6.4% reached the patient and necessitated increased monitoring and/or treatment, and 77% involved medications being used off label. Thirty-three percent of errors cited administering as the macrolevel cause of the error, 30% cited dispensing, 28% cited transcribing, and 7.9% cited prescribing. The most commonly cited medications were sertraline (20%), bupropion (19%), fluoxetine (15%), and trazodone (11%). We found no statistically significant association between medication and reported patient harm; harmful errors involved significantly more administering errors (59% vs 32%, p = .023), errors occurring in inpatient care (93% vs 68%, p = .012) and extra doses of medication (31% vs 10%, p = .025) compared with nonharmful errors. Outpatient errors involved significantly more dispensing errors and more errors due to inaccurate or omitted transcription than inpatient errors. Family notification of medication errors was reported in only 12% of errors. Pediatric antidepressant errors often reach patients, frequently involve off-label use of medications, and occur with varying severity and type depending on the location and the type of medication prescribed. Education and research should be directed toward prompt disclosure of medication errors and targeted error-reduction strategies for specific medication types and settings.

  8. Calculating SPRT Interpolation Error

    Science.gov (United States)

    Filipe, E.; Gentil, S.; Lóio, I.; Bosma, R.; Peruzzi, A.

    2018-02-01

    Interpolation error is a major source of uncertainty in the calibration of standard platinum resistance thermometers (SPRTs) in the subranges of the International Temperature Scale of 1990 (ITS-90). This interpolation error arises because the interpolation equations prescribed by the ITS-90 cannot perfectly accommodate all SPRTs' natural variations in resistance-temperature behavior, which generates different forms of non-uniqueness. This paper investigates type 3 non-uniqueness for fourteen SPRTs from five different manufacturers calibrated over the water-zinc subrange and demonstrates the use of the method of divided differences for calculating the interpolation error. The calculated maximum standard deviation of 0.25 mK (near 100°C) is similar to that observed in previous studies.
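
The method of divided differences mentioned here can be sketched as follows. A generic smooth curve stands in for an SPRT resistance-temperature relation, and the node temperatures are only approximately the ITS-90 water-zinc fixed points; both are illustrative assumptions, not the paper's data.

```python
import numpy as np

def divided_differences(x, y):
    """Newton divided-difference coefficients for nodes (x, y)."""
    x = np.asarray(x, dtype=float)
    coef = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        # Each pass replaces coef[j:] with the next-order divided differences
        coef[j:] = (coef[j:] - coef[j - 1:n - 1]) / (x[j:] - x[:n - j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form interpolant at t (Horner-style nesting)."""
    result = np.full_like(np.asarray(t, dtype=float), coef[-1])
    for c, xn in zip(coef[-2::-1], np.asarray(x_nodes, dtype=float)[-2::-1]):
        result = result * (t - xn) + c
    return result

# A generic smooth curve plays the role of the resistance-ratio function
w = lambda t: np.exp(t / 500.0)
nodes = np.array([0.01, 156.6, 231.93, 419.53])   # roughly the water-zinc fixed points
coef = divided_differences(nodes, w(nodes))

# Interpolation error between calibration points (zero at the nodes)
t = np.linspace(0.01, 419.53, 1000)
interp_error = newton_eval(coef, nodes, t) - w(t)
```

The interpolant matches the curve exactly at the calibration points, so the residual between nodes is a direct picture of the interpolation error the paper quantifies.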

  9. Standard practice for construction of a stepped block and its use to estimate errors produced by speed-of-sound measurement systems for use on solids

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This practice provides a means for evaluating both systematic and random errors for ultrasonic speed-of-sound measurement systems which are used for evaluating material characteristics associated with residual stress and which may also be used for nondestructive measurements of the dynamic elastic moduli of materials. Important features and construction details of a reference block crucial to these error evaluations are described. This practice can be used whenever the precision and bias of sound speed values are in question. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  10. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions conducive to fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  11. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist, but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper.

  12. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    The geomagnetic field varies on a variety of time- and length scales, which are only rudimentarily considered in most present field models. The part of the observed field that cannot be explained by a given model, the model residuals, is often considered as an estimate of the data uncertainty (which consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based on 5 years of Ørsted and CHAMP data, and includes secular variation and acceleration, as well as low-degree external (magnetospheric) and induced fields. The analysis is done in order to find the statistical behaviour of the space-time structure of the residuals, as a proxy for the data covariances...
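
One standard geostatistical tool for characterizing the space-time structure of residuals is the empirical semi-variogram. A minimal along-track sketch with synthetic residuals (invented data, not the Ørsted/CHAMP residuals of this record) is:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic along-track "model residuals": smooth unmodelled signal (moving
# average of white noise) plus white measurement noise, regularly sampled
n = 2000
signal = np.convolve(rng.normal(size=n + 49), np.ones(50) / 50, mode="valid")
resid = signal + 0.05 * rng.normal(size=n)

def empirical_variogram(z, lags):
    """Semi-variogram gamma(h) = 0.5 * mean((z[i+h] - z[i])**2) for each lag h."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])

gamma = empirical_variogram(resid, [1, 5, 20, 100, 400])
print(gamma)   # rises with lag, then flattens near the residual variance
```

If the residuals were truly uncorrelated, the variogram would be flat at the noise level; the rise toward a sill at larger lags is the signature of correlated, unmodelled signal that the usual Gaussian white-noise assumption misses.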

  13. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations that it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  14. State-Space Analysis of Model Error: A Probabilistic Parameter Estimation Framework with Spatial Analysis of Variance

    Science.gov (United States)

    2012-09-30

    atmospheric models and the chaotic growth of initial-condition (IC) error. The aim of our work is to provide new methods that begin to systematically disentangle the model inadequacy signal from the initial condition error signal.

  15. Cone-Beam CT Assessment of Interfraction and Intrafraction Setup Error of Two Head-and-Neck Cancer Thermoplastic Masks

    International Nuclear Information System (INIS)

    Velec, Michael; Waldron, John N.; O'Sullivan, Brian; Bayley, Andrew; Cummings, Bernard; Kim, John J.; Ringash, Jolie; Breen, Stephen L.; Lockwood, Gina A.; Dawson, Laura A.

    2010-01-01

    Purpose: To prospectively compare setup error in standard thermoplastic masks and skin-sparing masks (SSMs) modified with low neck cutouts for head-and-neck intensity-modulated radiation therapy (IMRT) patients. Methods and Materials: Twenty head-and-neck IMRT patients were randomized to be treated in a standard mask (SM) or SSM. Cone-beam computed tomography (CBCT) scans, acquired daily after both initial setup and any repositioning, were used for initial and residual interfraction evaluation, respectively. Weekly, post-IMRT CBCT scans were acquired for intrafraction setup evaluation. The population random (σ) and systematic (Σ) errors were compared for SMs and SSMs. Skin toxicity was recorded weekly by use of Radiation Therapy Oncology Group criteria. Results: We evaluated 762 CBCT scans in 11 patients randomized to the SM and 9 to the SSM. Initial interfraction σ was 1.6 mm or less or 1.1 deg. or less for SM and 2.0 mm or less and 0.8 deg. for SSM. Initial interfraction Σ was 1.0 mm or less or 1.4 deg. or less for SM and 1.1 mm or less or 0.9 deg. or less for SSM. These errors were reduced before IMRT with CBCT image guidance with no significant differences in residual interfraction or intrafraction uncertainties between SMs and SSMs. Intrafraction σ and Σ were less than 1 mm and less than 1 deg. for both masks. Less severe skin reactions were observed in the cutout regions of the SSM compared with non-cutout regions. Conclusions: Interfraction and intrafraction setup error is not significantly different for SSMs and conventional masks in head-and-neck radiation therapy. Mask cutouts should be considered for these patients in an effort to reduce skin toxicity.

  16. Common errors in image interpretation in oncology.

    Science.gov (United States)

    Vivas, I

    2018-02-20

    Errors in image interpretation are inevitable and generally multifactorial. They can be due to the radiologist's failure to interpret the findings correctly (including cognitive causes, perceptual errors, or ambiguity in reporting) or to problems related to the system (technical problems in image acquisition, incorrect clinical information, excessive workload, or inadequate working conditions). It is the radiologist's responsibility to know why errors occur and how to detect them to prevent them from occurring again. This article focuses on the problem of errors in diagnosing oncologic patients, both at the time of diagnosis and during follow-up, as well as in the study of the response to treatment with new molecular therapies. To reduce possible errors, radiologists should ensure a systematic reading and an assessment of the oncologic response over time in the clinical context of the patient; they also need to have and apply knowledge of the new specific criteria for the response of each tumor type in the management of the patient. Copyright © 2018 SERAM. Published by Elsevier España, S.L.U. All rights reserved.

  17. Agricultural pesticide residues

    International Nuclear Information System (INIS)

    Fuehr, F.

    1984-01-01

    The utilization of tracer techniques in the study of agricultural pesticide residues is reviewed under the following headings: lysimeter experiments, micro-ecosystems, translocation in soil, degradation of pesticides in soil, biological availability of soil-applied substances, bound residues in the soil, use of macro- and microautography, double and triple labelling, use of tracer labelling in animal experiments. (U.K.)

  18. Strategies for reducing medication errors in the emergency department.

    Science.gov (United States)

    Weant, Kyle A; Bailey, Abby M; Baker, Stephanie N

    2014-01-01

    Medication errors are an all-too-common occurrence in emergency departments across the nation. This is largely secondary to a multitude of factors that create an almost ideal environment for medication errors to thrive. To limit and mitigate these errors, it is necessary to have a thorough knowledge of the medication-use process in the emergency department and develop strategies targeted at each individual step. Some of these strategies include medication-error analysis, computerized provider-order entry systems, automated dispensing cabinets, bar-coding systems, medication reconciliation, standardizing medication-use processes, education, and emergency-medicine clinical pharmacists. Special consideration also needs to be given to the development of strategies for the pediatric population, as they can be at an elevated risk of harm. Regardless of the strategies implemented, the prevention of medication errors begins and ends with the development of a culture that promotes the reporting of medication errors, and a systematic, nonpunitive approach to their elimination.

  19. Error management in audit firms: Error climate, type, and originator

    NARCIS (Netherlands)

    Gold, A.H.; Gronewold, U.; Salterio, S.E.

    2014-01-01

    This paper examines how the treatment of audit staff who discover errors in audit files by superiors affects their willingness to report these errors. The way staff are treated by superiors is labelled as the audit office error management climate. In a "blame-oriented" climate errors are not

  20. Error tolerance: an evaluation of residents' repeated motor coordination errors.

    Science.gov (United States)

    Law, Katherine E; Gwillim, Eran C; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; Fiers, Rebekah M; Rutherford, Drew N; Pugh, Carla M

    2016-10-01

    The study investigates the relationship between motor coordination errors and total errors using a human factors framework. We hypothesize motor coordination errors will correlate with total errors and provide validity evidence for error tolerance as a performance metric. Residents' laparoscopic skills were evaluated during a simulated laparoscopic ventral hernia repair for motor coordination errors when grasping for intra-abdominal mesh or suture. Tolerance was defined as repeated, failed attempts to correct an error and the time required to recover. Residents (N = 20) committed an average of 15.45 (standard deviation [SD] = 4.61) errors and 1.70 (SD = 2.25) motor coordination errors during mesh placement. Total errors correlated with motor coordination errors (r[18] = .572, P = .008). On average, residents required 5.09 recovery attempts for 1 motor coordination error (SD = 3.15). Recovery approaches correlated to total error load (r[13] = .592, P = .02). Residents' motor coordination errors and recovery approaches predict total error load. Error tolerance proved to be a valid assessment metric relating to overall performance. Copyright © 2016 Elsevier Inc. All rights reserved.
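    The correlations reported above (e.g. r[18] = .572, where 18 is the degrees of freedom, N - 2) are plain Pearson coefficients, which can be computed directly; a small sketch with made-up scores:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))

# Perfectly linearly related scores correlate at r = 1.0 (or -1.0 if inverted).
r_pos = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
r_neg = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```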

  1. Error Correcting Codes -34 ...

    Indian Academy of Sciences (India)

    Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore. Her interests are in Theoretical Computer Science. Series Article: Error Correcting Codes - 2. The Hamming Codes. In the first article of this series we showed how redundancy introduced into a message transmitted over a noisy channel could improve the reliability of transmission.

  2. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March 1997 pp 33-47. Fulltext. Click here to view fulltext PDF. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/03/0033-0047 ...

  3. Error Correcting Codes

    Indian Academy of Sciences (India)

    focused pictures of Triton, Neptune's largest moon. This great feat was in no small measure due to the fact that the sophisticated communication system on Voyager had an elaborate error correcting scheme built into it. At Jupiter and Saturn, a convolutional code was used to enhance the reliability of transmission, and at ...

  4. Error Correcting Codes

    Indian Academy of Sciences (India)

    It was engineering on the grand scale - the use of new material for ... ROAD REPAIRSCE!STOP}!TL.,ZBFALK: where errors occur in both the message as well as the check symbols, the decoder would be able to correct all of these (as there are not more than 8 ... before it is conveyed to the master disc. Modulation caters for ...

  5. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  6. Error Correcting Codes

    Indian Academy of Sciences (India)

    sound quality is, in essence, obtained by accurate waveform coding and decoding of the audio signals. In addition, the coded audio information is protected against disc errors by the use of a Cross Interleaved Reed-Solomon Code (CIRC). Reed-. Solomon codes were discovered by Irving Reed and Gus Solomon in 1960.

  7. Errors and ozone measurement

    Science.gov (United States)

    Mcpeters, Richard D.; Gleason, James F.

    1993-01-01

    It is held that Mimm's (1993) comparison of hand-held TOPS instrument data with the Nimbus 7 satellite's Total Ozone Mapping Spectrometer's (TOMS) ozone data was intrinsically flawed, in that the TOMS data were preliminary and therefore unsuited for quantitative analysis. It is noted that the TOMS calibration was in error.

  8. Random errors revisited

    DEFF Research Database (Denmark)

    Jacobsen, Finn

    2000-01-01

    It is well known that the random errors of sound intensity estimates can be much larger than the theoretical minimum value determined by the BT-product, in particular under reverberant conditions and when there are several sources present. More than ten years ago it was shown that one can predict...

  9. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and its taxonomy. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, an excessive authority gradient, and excessive professional courtesy cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  10. Quantitative analysis of error mode, error effect and criticality

    International Nuclear Information System (INIS)

    Li Pengcheng; Zhang Li; Xiao Dongsheng; Chen Guohua

    2009-01-01

    The quantitative method of human error mode, effect and criticality is developed in order to reach the ultimate goal of Probabilistic Safety Assessment. The criticality identification matrix of human error mode and task is built to identify the critical human error mode and task and the critical organizational root causes on the basis of the identification of human error probability, error effect probability and the criticality index of error effect. Therefore, this will be beneficial to take targeted measures to reduce and prevent the occurrence of critical human error mode and task. Finally, the application of the technique is explained through the application example. (authors)
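    The criticality ranking described above can be sketched as a product of probabilities and a severity weight; the error modes, probabilities and weights below are invented for illustration and are not from the paper:

```python
def criticality_index(p_error, p_effect, severity):
    """Illustrative criticality: P(error mode occurs) * P(effect | error)
    * a severity weight for the effect."""
    return p_error * p_effect * severity

# Hypothetical human error modes for a task (all numbers made up).
modes = {
    "omit_procedure_step": criticality_index(0.020, 0.5, 3),
    "enter_wrong_value":   criticality_index(0.010, 0.9, 5),
    "misread_indicator":   criticality_index(0.005, 0.7, 4),
}
# The mode with the highest index is the one to target with countermeasures.
most_critical = max(modes, key=modes.get)
```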

  11. A spectral synthesis method to suppress aliasing and calibrate for delay errors in Fourier transform correlators

    Science.gov (United States)

    Kaneko, T.; Grainge, K.

    2008-10-01

    Context: Fourier transform (or lag) correlators in radio interferometers can serve as an efficient means of synthesising spectral channels. However, aliasing corrupts the edge channels, so they usually have to be excluded from the data set. In systems with around 10 channels, the loss in sensitivity can be significant. In addition, the low level of residual aliasing in the remaining channels may cause systematic errors. Moreover, delay errors have been widely reported in implementations of broadband analogue correlators, and simulations have shown that delay errors exacerbate the effects of aliasing. Aims: We describe a software-based approach that suppresses aliasing by oversampling the cross-correlation function. This method can be applied to interferometers with individually tracking antennas equipped with a discrete path-compensator system. It is based on the well-known property of interferometers whereby the drift-scan response is the Fourier transform of the source's band-limited spectrum. Methods: In this paper, we simulate a single-baseline interferometer, both for a real and a complex correlator. Fringe rotation usually compensates for the phase of the fringes to bring the phase centre in line with the tracking centre. Instead, a modified fringe rotation is applied. This enables an oversampled cross-correlation function to be reconstructed by gathering successive time samples. Results: Simulations show that the oversampling method can synthesise the cross-power spectrum while avoiding aliasing and works robustly in the presence of noise. An important side benefit is that it naturally accounts for delay errors in the correlator, and the resulting spectral channels are regularly gridded.
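    The aliasing problem addressed here follows from basic sampling: a band-edge tone sampled below its Nyquist rate folds to a spurious frequency, while oversampling recovers it. A minimal numerical illustration (the frequencies and rates are arbitrary; this is not the paper's correlator simulation):

```python
import numpy as np

def peak_freq(x, fs):
    """Frequency of the strongest one-sided FFT bin."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spec)]

f_sig = 9.0                              # Hz: a tone near the band edge
t_under = np.arange(0.0, 4.0, 1 / 14.0)  # fs = 14 Hz < 2*f_sig -> folds to 5 Hz
t_over = np.arange(0.0, 4.0, 1 / 28.0)   # fs = 28 Hz: 2x oversampled -> correct

f_aliased = peak_freq(np.cos(2 * np.pi * f_sig * t_under), 14.0)
f_true = peak_freq(np.cos(2 * np.pi * f_sig * t_over), 28.0)
```

At 14 Hz sampling the 9 Hz tone appears at |9 - 14| = 5 Hz, corrupting a channel where no signal exists; doubling the sampling of the correlation domain recovers the true 9 Hz component.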

  12. The contour method cutting assumption: error minimization and correction

    Energy Technology Data Exchange (ETDEWEB)

    Prime, Michael B [Los Alamos National Laboratory; Kastengren, Alan L [ANL

    2010-01-01

    The recently developed contour method can measure a 2-D, cross-sectional residual-stress map. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contours of the new surfaces created by the cut, which will not be flat if residual stresses are relaxed by the cutting, are then measured and used to calculate the original residual stresses. The precise nature of the assumption about the cut is presented theoretically and is evaluated experimentally. Simply assuming a flat cut is overly restrictive and misleading. The critical assumption is that the width of the cut, when measured in the original, undeformed configuration of the body, is constant. Stresses at the cut tip during cutting cause the material to deform, which causes errors. The effect of such cutting errors on the measured stresses is presented. The important parameters are quantified. Experimental procedures for minimizing these errors are presented. An iterative finite element procedure to correct for the errors is also presented. The correction procedure is demonstrated on experimental data from a steel beam that was plastically bent to put in a known profile of residual stresses.

  13. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
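    Interval arithmetic of the kind the article automates (INTLAB is a MATLAB toolbox) can be sketched in a few lines; the toy class below ignores outward rounding of the interval endpoints, which a real implementation such as INTLAB must handle:

```python
class Interval:
    """A closed interval [lo, hi]; arithmetic returns the tightest
    enclosing interval of the exact result (no outward rounding)."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def width(self):
        return self.hi - self.lo

x = Interval(1.9, 2.1)  # a measurement 2.0 +/- 0.1
y = Interval(2.9, 3.1)  # a measurement 3.0 +/- 0.1
z = x * y + x           # propagated enclosure of x*y + x
```

The width of `z` bounds the error of the computed expression directly, which is the appeal of the interval approach for complicated formulas: no partial derivatives or propagation formulas are needed.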

  14. Handling of Solid Residues

    International Nuclear Information System (INIS)

    Medina Bermudez, Clara Ines

    1999-01-01

    The topic of solid residues is specifically of great interest and concern for the authorities, institutions and community that identify in them a true threat against the human health and the atmosphere in the related with the aesthetic deterioration of the urban centers and of the natural landscape; in the proliferation of vectorial transmitters of illnesses and the effect on the biodiversity. Inside the wide spectrum of topics that they keep relationship with the environmental protection, the inadequate handling of solid residues and residues dangerous squatter an important line in the definition of political and practical environmentally sustainable. The industrial development and the population's growth have originated a continuous increase in the production of solid residues; of equal it forms, their composition day after day is more heterogeneous. The base for the good handling includes the appropriate intervention of the different stages of an integral administration of residues, which include the separation in the source, the gathering, the handling, the use, treatment, final disposition and the institutional organization of the administration. The topic of the dangerous residues generates more expectation. These residues understand from those of pathogen type that are generated in the establishments of health that of hospital attention, until those of combustible, inflammable type, explosive, radio-active, volatile, corrosive, reagent or toxic, associated to numerous industrial processes, common in our countries in development

  15. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
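    A simplified rendition of the idea (not the report's exact derivation) is to scale the theoretical covariance (A^T W A)^{-1} by the average weighted residual variance, so that the actual residuals, rather than the assumed observation errors, set the size of the state uncertainty. The function name and the test scenario below are illustrative:

```python
import numpy as np

def batch_ls_with_empirical_cov(A, y, w):
    """Weighted batch least squares with the covariance rescaled by the
    observed weighted residual variance (reduced chi-square), so the
    actual -- not assumed -- measurement errors size the uncertainty."""
    W = np.diag(w)
    N = A.T @ W @ A                            # normal matrix
    x_hat = np.linalg.solve(N, A.T @ W @ y)    # state estimate
    r = y - A @ x_hat                          # measurement residuals
    m, n = A.shape
    chi2_red = float(r @ (w * r)) / (m - n)    # average weighted residual variance
    P_emp = chi2_red * np.linalg.inv(N)        # empirical state error covariance
    return x_hat, P_emp, chi2_red

# Fit y = a + b*t where the weights wrongly assume sigma = 0.1
# but the data actually carry sigma = 0.5 noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 200)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 3.0 * t + rng.normal(0.0, 0.5, t.size)
w = np.full(t.size, 1.0 / 0.1**2)
x_hat, P_emp, chi2_red = batch_ls_with_empirical_cov(A, y, w)
```

Here `chi2_red` comes out near (0.5/0.1)^2 = 25, and the empirical covariance is inflated by that factor relative to the theoretical one, reflecting the true uncertainty that the assumed observation errors missed.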

  16. Analysis of Random Errors in Horizontal Sextant Angles

    Science.gov (United States)

    1980-09-01

    sea horizon, bringing the direct and reflected images into coincidence and reading the micrometer and vernier. This is repeated several times ... differences due to the direction of rotation of the micrometer drum were examined, as well as the variability in the determination of sextant index error. ... minutes of arc respectively. In addition, systematic errors resulting from angular differences due to the direction of rotation of the micrometer drum

  17. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  18. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  19. Error reduction in surgical pathology.

    Science.gov (United States)

    Nakhleh, Raouf E

    2006-05-01

    Because of its complex nature, surgical pathology practice is inherently error prone. Currently, there is pressure to reduce errors in medicine, including pathology. To review factors that contribute to errors and to discuss error-reduction strategies. Literature review. Multiple factors contribute to errors in medicine, including variable input, complexity, inconsistency, tight coupling, human intervention, time constraints, and a hierarchical culture. Strategies that may reduce errors include reducing reliance on memory, improving information access, error-proofing processes, decreasing reliance on vigilance, standardizing tasks and language, reducing the number of handoffs, simplifying processes, adjusting work schedules and environment, providing adequate training, and placing the correct people in the correct jobs. Surgical pathology is a complex system with ample opportunity for error. Significant error reduction is unlikely to occur without a sustained comprehensive program of quality control and quality assurance. Incremental adoption of information technology and automation along with improved training in patient safety and quality management can help reduce errors.

  20. Effects of Target Positioning Error on Motion Compensation for Airborne Interferometric SAR

    Directory of Open Access Journals (Sweden)

    Li Yin-wei

    2013-12-01

    Full Text Available Measurement inaccuracies of the Inertial Measurement Unit/Global Positioning System (IMU/GPS), as well as the positioning error of the target, may contribute to residual uncompensated motion errors in the MOtion COmpensation (MOCO) approach based on IMU/GPS measurements. Addressing the effects of target positioning error on MOCO for airborne interferometric SAR, the paper first derives a mathematical model of the residual motion error brought about by target positioning error under squint conditions. Based on this model, the paper analyzes the residual motion error caused by system sampling delay error, Doppler centre frequency error and reference DEM error, each of which results in target positioning error. The paper then discusses the effects of the reference DEM error on interferometric SAR image quality, the interferometric phase and the coherence coefficient. The research provides a theoretical basis for MOCO precision in the signal processing of airborne high-precision SAR and airborne repeat-pass interferometric SAR.
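    The sensitivity chain analyzed here, DEM height error to line-of-sight range error to residual interferometric phase, can be sketched with a flat-earth approximation (the wavelength and geometry values are assumed for illustration; the paper's squint-geometry model is more involved):

```python
import math

def range_error_from_dem(h_err, look_angle_deg):
    """Approximate line-of-sight range error from a reference-DEM height
    error at a given look angle (flat-earth approximation)."""
    return h_err * math.cos(math.radians(look_angle_deg))

def phase_error(delta_r, wavelength):
    """Two-way residual interferometric phase (rad) from an
    uncompensated range error."""
    return 4.0 * math.pi * delta_r / wavelength

lam = 0.031                                  # assumed X-band wavelength, m
dr = range_error_from_dem(5.0, 45.0)         # 5 m DEM error at 45 deg look angle
phi = phase_error(dr, lam)                   # residual phase, ~1400 rad
```

Even a few metres of DEM error produce many cycles of residual phase at centimetre wavelengths, which is why the reference-DEM error term dominates the image-quality and coherence effects the paper examines.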

  1. Human Error In Complex Systems

    Science.gov (United States)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  2. Transition Models with Measurement Errors

    OpenAIRE

    Magnac, Thierry; Visser, Michael

    1999-01-01

    In this paper, we estimate a transition model that allows for measurement errors in the data. The measurement errors arise because the survey design is partly retrospective, so that individuals sometimes forget or misclassify their past labor market transitions. The observed data are adjusted for errors via a measurement-error mechanism. The parameters of the distribution of the true data, and those of the measurement-error mechanism are estimated by a two-stage method. The results, based on ...

  3. Common errors in disease mapping

    Directory of Open Access Journals (Sweden)

    Ricardo Ocaña-Riola

    2010-05-01

    Full Text Available Many morbidity and mortality atlases and small-area studies have been carried out over the last decade. However, the methods used to draw up such research, the interpretation of results and the conclusions published are often inaccurate. Often, the proliferation of this practice has led to inefficient decision-making, implementation of inappropriate health policies and a negative impact on the advancement of scientific knowledge. This paper reviews the most frequent errors in the design, analysis and interpretation of small-area epidemiological studies and proposes a diagnostic evaluation test that should enable the scientific quality of published papers to be ascertained. Nine common mistakes in disease mapping methods are discussed. From this framework, and following the theory of diagnostic evaluation, a standardised test to evaluate the scientific quality of a small-area epidemiology study has been developed. Optimal quality is achieved with the maximum score (16 points), average with a score between 8 and 15 points, and low with a score of 7 or below. A systematic evaluation of scientific papers, together with an enhanced quality in future research, will contribute towards increased efficacy in epidemiological surveillance and in health planning based on the spatio-temporal analysis of ecological information.
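    The 16-point quality test maps directly to a scoring rule; a literal transcription of the thresholds quoted above:

```python
def quality_category(score):
    """Quality level of a small-area study from its 16-point checklist
    score: 16 -> optimal, 8-15 -> average, 7 or below -> low."""
    if not 0 <= score <= 16:
        raise ValueError("score must be between 0 and 16")
    if score == 16:
        return "optimal"
    if score >= 8:
        return "average"
    return "low"
```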

  4. A simple error classification system for understanding sources of error in automatic speech recognition and human transcription.

    Science.gov (United States)

    Zafar, Atif; Mamlin, Burke; Perkins, Susan; Belsito, Anne M; Overhage, J Marc; McDonald, Clement J

    2004-09-01

    To (1) discover the types of errors most commonly found in clinical notes that are generated either using automatic speech recognition (ASR) or via human transcription and (2) to develop efficient rules for classifying these errors based on the categories found in (1). The purpose of classifying errors into categories is to understand the underlying processes that generate these errors, so that measures can be taken to improve these processes. We integrated the Dragon NaturallySpeaking v4.0 speech recognition engine into the Regenstrief Medical Record System. We captured the text output of the speech engine prior to error correction by the speaker. We also acquired a set of human-transcribed but uncorrected notes for comparison. We then attempted to error-correct these notes based on looking at the context alone. Initially, three domain experts independently examined 104 ASR notes (containing 29,144 words) generated by a single speaker and 44 human-transcribed notes (containing 14,199 words) generated by multiple speakers for errors. Collaborative group sessions were subsequently held where error categories were determined and rules developed and incrementally refined for systematically examining the notes and classifying errors. We found that the errors could be classified into nine categories: (1) enunciation errors occurring due to speaker mispronunciation, (2) dictionary errors resulting from missing terms, (3) suffix errors caused by misrecognition of appropriate tenses of a word, (4) added words, (5) deleted words, (6) homonym errors resulting from substitution of a phonetically identical word, (7) spelling errors, (8) nonsense errors, words/phrases whose meaning could not be appreciated by examining just the context, and (9) critical errors, words/phrases where a reader of a note could potentially misunderstand the concept that was related by the speaker. A simple method is presented for examining errors in transcribed documents and classifying these
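    Several of the nine categories (added words, deleted words, suffix errors) can be detected from a word-level alignment alone; a simplified sketch using Python's difflib (the four-character stem rule for "suffix" is an invented heuristic, not the paper's rule set, and the sentences are made up):

```python
import difflib

def classify_word_errors(reference, hypothesis):
    """Tag word-level differences between a reference transcript and an
    ASR hypothesis as added / deleted / suffix / substitution -- the
    subset of categories detectable from alignment alone."""
    ref, hyp = reference.split(), hypothesis.split()
    errors = []
    sm = difflib.SequenceMatcher(a=ref, b=hyp)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "insert":
            errors += [("added", w) for w in hyp[j1:j2]]
        elif op == "delete":
            errors += [("deleted", w) for w in ref[i1:i2]]
        elif op == "replace":
            for r, h in zip(ref[i1:i2], hyp[j1:j2]):
                # Shared stem but different ending -> likely a tense/suffix error.
                kind = "suffix" if r != h and r[:4] == h[:4] else "substitution"
                errors.append((kind, f"{r}->{h}"))
    return errors

errs = classify_word_errors("patient denies chest pains today",
                            "patient denied chest pain today and")
```

Categories such as homonym, nonsense, or critical errors require phonetic or semantic context and are exactly why the study relied on domain experts rather than alignment alone.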

  5. Measurement System Characterization in the Presence of Measurement Errors

    Science.gov (United States)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
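    One classical estimator that incorporates exactly this variance ratio is Deming regression; it is shown here as a point of comparison, not as the paper's own "modified least squares". With delta = var(response error) / var(measurement error), the straight-line slope has a closed form:

```python
import numpy as np

def deming_fit(x, y, delta):
    """Errors-in-variables straight-line fit with known variance ratio
    delta = var(response error) / var(measurement error).
    Classical Deming regression, shown for illustration."""
    xb, yb = x.mean(), y.mean()
    sxx = np.sum((x - xb) ** 2)
    syy = np.sum((y - yb) ** 2)
    sxy = np.sum((x - xb) * (y - yb))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
             ) / (2.0 * sxy)
    return slope, yb - slope * xb

# Noise-free check: the fit must recover y = 2x + 1 exactly.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0
slope, intercept = deming_fit(x, y, delta=1.0)
```

Unlike ordinary least squares, the estimate changes with delta: as delta grows large the fit approaches ordinary least squares (error only in the response), and as delta shrinks it approaches a regression of x on y.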

  6. Reporting Self-Made Errors: The Impact of Organizational Error-Management Climate and Error Type

    NARCIS (Netherlands)

    Gold, A.H.; Gronewold, U.; Salterio, S.E.

    2013-01-01

    We study how an organization's error-management climate affects organizational members' beliefs about other members' willingness to report errors that they discover when chance of error detection by superiors and others is extremely low. An error-management climate, as a component of the

  7. Improving blood safety: Errors management in transfusion medicine

    Directory of Open Access Journals (Sweden)

    Bujandrić Nevenka

    2014-01-01

    Introduction. The concept of blood safety covers the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system for the systematic monitoring of adverse reactions and incidents concerning the blood donor or patient. Monitoring of near-miss errors reveals the critical points in the working process and increases transfusion safety. Objective. The aim of the study was to present the results of an analysis of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. Methods. A one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Results. Errors were distributed according to type, frequency and the part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that no errors with actual health consequences for the blood donor/patient occurred. Errors with potentially damaging consequences for patients were, however, detected throughout the entire transfusion chain, most of them in the preanalytical phase. The human factor was responsible for the largest number of errors. Conclusion. An error-reporting system plays an important role in error management and in reducing the transfusion-related risk of adverse events and incidents. Ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. Errors in transfusion medicine can be avoided to a large extent, and prevention is cost-effective, systematic and applicable.

  8. [Residual neuromuscular blockade].

    Science.gov (United States)

    Fuchs-Buder, T; Schmartz, D

    2017-06-01

    Even small degrees of residual neuromuscular blockade, i.e. a train-of-four (TOF) ratio >0.6, may lead to clinically relevant consequences for the patient. Especially upper airway integrity and the ability to swallow may still be markedly impaired. Moreover, increasing evidence suggests that residual neuromuscular blockade may affect the postoperative outcome of patients. The incidence of these small degrees of residual blockade is relatively high and may persist for more than 90 min after a single intubating dose of an intermediately acting neuromuscular blocking agent, such as rocuronium or atracurium. Both neuromuscular monitoring and pharmacological reversal are key elements for the prevention of postoperative residual blockade.

  9. TENORM: Wastewater Treatment Residuals

    Science.gov (United States)

    Water and wastes which have been discharged into municipal sewers are treated at wastewater treatment plants. These may contain trace amounts of both man-made and naturally occurring radionuclides which can accumulate in the treatment plant and residuals.

  10. Error analysis for pesticide detection performed on paper-based microfluidic chip devices

    Science.gov (United States)

    Yang, Ning; Shen, Kai; Guo, Jianjiang; Tao, Xinyi; Xu, Peifeng; Mao, Hanping

    2017-07-01

    Paper chips are efficient and inexpensive devices for pesticide residue detection. However, the causes of detection error are not well understood, which is the main obstacle to the development of pesticide residue detection. This paper focuses on error analysis for pesticide detection performed on paper-based microfluidic chip devices, testing every candidate factor in order to build mathematical models of the detection error. As a result, a double-channel structure is selected as the optimal chip structure for effectively reducing detection error, and a wavelength of 599.753 nm is chosen because it is the detection wavelength most sensitive to variation in pesticide concentration. Finally, mathematical models of the detection error with respect to detection temperature and preparation time are derived. This research lays a theoretical foundation for accurate pesticide residue detection based on paper-based microfluidic chip devices.
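The wavelength-selection step (choosing the detection wavelength most sensitive to concentration changes) can be sketched as picking the calibration curve with the largest slope magnitude. A hedged illustration with invented data and function names, not the paper's actual procedure:

```python
def most_sensitive_wavelength(concentrations, readings):
    """Pick the wavelength whose calibration curve responds most steeply
    to concentration, i.e. has the largest |slope|.

    readings maps wavelength (nm) -> list of readings, one per concentration.
    """
    def slope(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))
    return max(readings, key=lambda wl: abs(slope(concentrations, readings[wl])))
```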

  11. Residuation in orthomodular lattices

    Directory of Open Access Journals (Sweden)

    Chajda Ivan

    2017-04-01

    We show that every idempotent weakly divisible residuated lattice satisfying the double negation law can be transformed into an orthomodular lattice. The converse holds if adjointness is replaced by conditional adjointness. Moreover, we show that every positive right residuated lattice satisfying the double negation law and two further simple identities can be converted into an orthomodular lattice. In this case the converse statement is also true and the correspondence is nearly one-to-one.

  12. Characterization of Hospital Residuals

    International Nuclear Information System (INIS)

    Blanco Meza, A.; Bonilla Jimenez, S.

    1997-01-01

    The main objective of this investigation is the characterization of solid residuals. A description of the handling of the liquid and gaseous waste generated in hospitals is also given, identifying the sources where they originate. To achieve the proposed objective the work was divided into three stages. The first was planning and coordination with each hospital center, so that the schedule for waste collection could be determined. In the second stage fieldwork was carried out, consisting of gathering quantitative and qualitative information on the general state of residuals handling. In the third and last stage, the information previously obtained was organized to express the results as the production rate per bed per day, the generation of solid residuals by sampled service, the types of solid residuals and their density. With the results obtained, criteria are established to determine design parameters for final disposal, whether by incineration, trituration, sanitary landfill or recycling of some materials, and storage policies for the solid residuals that allow the collection frequency to be determined. The study concludes that it is necessary to improve residuals-handling conditions in some respects, to provide the cleaning personnel with the collection and safety equipment minimally required to carry out this work efficiently, and to maintain control of all hazardous waste, such as sharp or contaminated materials. In this way, an appreciable reduction of the environmental impact is guaranteed. (Author) [es

  13. Calibration Errors in Interferometric Radio Polarimetry

    Science.gov (United States)

    Hales, Christopher A.

    2017-08-01

    Residual calibration errors are difficult to predict in interferometric radio polarimetry because they depend on the observational calibration strategy employed, encompassing the Stokes vector of the calibrator and parallactic angle coverage. This work presents analytic derivations and simulations that enable examination of residual on-axis instrumental leakage and position-angle errors for a suite of calibration strategies. The focus is on arrays comprising alt-azimuth antennas with common feeds over which parallactic angle is approximately uniform. The results indicate that calibration schemes requiring parallactic angle coverage in the linear feed basis (e.g., the Atacama Large Millimeter/submillimeter Array) need only observe over 30°, beyond which no significant improvements in calibration accuracy are obtained. In the circular feed basis (e.g., the Very Large Array above 1 GHz), 30° is also appropriate when the Stokes vector of the leakage calibrator is known a priori, but this rises to 90° when the Stokes vector is unknown. These findings illustrate and quantify concepts that were previously obscure rules of thumb.

  14. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean: 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean: 1.2 errors/case), all of which were multifactorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multifactorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  15. Laser Doppler anemometer measurements using nonorthogonal velocity components: error estimates.

    Science.gov (United States)

    Orloff, K L; Snyder, P K

    1982-01-15

    Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.
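The coupling the abstract describes can be sketched for a generic two-channel configuration: one channel measures the velocity along x, the other along a direction theta away from x, and the transverse orthogonal component is recovered through a transformation whose 1/sin(theta) factor amplifies both calibration and statistical errors as the channels approach each other. A hedged sketch under these assumed geometry conventions, not the paper's exact equations:

```python
import math

def orthogonal_components(v1, v2, theta_deg, sigma1, sigma2):
    """Convert two nonorthogonal LDA channel readings to orthogonal
    velocity components and propagate their (assumed independent) errors.

    v1 is measured along x; v2 along a direction theta_deg from x.
    """
    th = math.radians(theta_deg)
    u = v1
    w = (v2 - v1 * math.cos(th)) / math.sin(th)
    sigma_u = sigma1
    # First-order propagation: the 1/sin(theta) factor amplifies the
    # transverse error as the two measurement directions approach each other.
    sigma_w = math.sqrt(sigma2 ** 2 + (sigma1 * math.cos(th)) ** 2) / math.sin(th)
    return u, w, sigma_u, sigma_w
```

At theta = 90 degrees the channels are uncoupled and no amplification occurs; at small angles sigma_w grows without bound, which is the extra susceptibility the abstract refers to.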

  16. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    International Nuclear Information System (INIS)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S.

    2012-01-01

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II–IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. 
Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction

  17. Medication errors in prehospital management of simulated pediatric anaphylaxis.

    Science.gov (United States)

    Lammers, Richard; Willoughby-Byrwa, Maria; Fales, William

    2014-01-01

    Systematic evaluation of the performance of prehospital providers during actual pediatric anaphylaxis cases has never been reported. Epinephrine medication errors in pediatric resuscitation are common, but the root causes of these errors are not fully understood. The primary objective of this study was to identify underlying causes of prehospital medication errors observed during a simulated pediatric anaphylactic reaction. Two- and four-person emergency medical services crews from eight geographically diverse agencies participated in a 20-minute simulation of a 5-year-old child with progressive respiratory distress and hypotension from an anaphylactic reaction. Crews used their own equipment and drugs. A checklist-based scoring protocol was developed to help identify errors. A trained facilitator conducted a structured debriefing, supplemented by playback of video recordings, immediately after the simulated event to elicit underlying causes of errors. Errors were analyzed with mixed quantitative and qualitative methods. One hundred forty-two subjects participated in 62 simulation sessions. Ninety-five percent of crews (59/62) gave epinephrine, but only 27 of those crews (46%) delivered the correct dose of epinephrine in an appropriate concentration and route. Twelve crews (20%) gave a dose that was ≥5 times the correct dose; 8 crews (14%) bolused epinephrine intravenously. Among the 55 crews that gave diphenhydramine, 4 delivered the protocol-based dose. Three crews provided an intravenous steroid, and 1 used the protocol-based dose. Underlying causes of errors fell into eight themes: faulty reasoning, weight-estimation errors, faulty recall of medication dosages, problematic references, calculation errors, dose estimation, communication errors, and medication delivery errors. Simulation, followed by a structured debriefing, identified multiple underlying causes of medication errors in the prehospital management of pediatric anaphylactic reactions.

  18. Distinguishing Errors in Measurement from Errors in Optimization

    OpenAIRE

    Rulon D. Pope; Richard E. Just

    2003-01-01

    Typical econometric production practices under duality ignore the source of disturbances. We show that, depending on the source, a different approach to estimation is required. The typical approach applies under errors in factor input measurement rather than errors in optimization. An approach to the identification of disturbance sources is suggested. We find credible evidence in U.S. agriculture of errors in optimization compared to errors of measurement, and thus reject the typical specific...

  19. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners will also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using an MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
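The MEE criterion can be sketched with the standard estimator from information-theoretic learning: a Parzen (Gaussian-kernel) estimate of Renyi's quadratic entropy of the classifier's errors. MEE training minimizes that entropy, which is equivalent to maximizing the "information potential" below. A minimal sketch, with the kernel width sigma as an assumed free parameter:

```python
import math

def information_potential(errors, sigma=1.0):
    """Parzen estimate of Renyi's quadratic information potential of an
    error sample.  A tightly concentrated error distribution scores
    higher, so maximizing this is equivalent to minimizing error entropy.
    """
    n = len(errors)
    # Gaussian kernel of width sigma*sqrt(2), from convolving two
    # width-sigma Parzen kernels.
    norm = 1.0 / (2.0 * sigma * math.sqrt(math.pi))
    return sum(norm * math.exp(-(ei - ej) ** 2 / (4.0 * sigma ** 2))
               for ei in errors for ej in errors) / (n * n)
```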

  20. Weld residual stresses near the bimetallic interface in clad RPV steel: A comparison between deep-hole drilling and neutron diffraction data

    Energy Technology Data Exchange (ETDEWEB)

    James, M.N., E-mail: mjames@plymouth.ac.uk [School of Marine Science and Engineering, University of Plymouth, Drake Circus, Plymouth (United Kingdom); Department of Mechanical Engineering, Nelson Mandela Metropolitan University, Port Elizabeth (South Africa); Newby, M.; Doubell, P. [Eskom Holdings SOC Ltd, Lower Germiston Road, Rosherville, Johannesburg (South Africa); Hattingh, D.G. [Department of Mechanical Engineering, Nelson Mandela Metropolitan University, Port Elizabeth (South Africa); Serasli, K.; Smith, D.J. [Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol (United Kingdom)

    2014-07-01

    Highlights: • Identification of residual stress trends across bimetallic interface in stainless clad RPV. • Comparison between deep hole drilling (DHD – stress components in two directions) and neutron diffraction (ND – stress components in three directions). • Results indicate that both techniques can assess the trends in residual stress across the interface. • Neutron diffraction gives more detailed information on transient residual stress peaks. - Abstract: The inner surface of ferritic steel reactor pressure vessels (RPV) is clad with strip welded austenitic stainless steel primarily to increase the long-term corrosion resistance of the ferritic vessel. The strip welding process used in the cladding operation induces significant residual stresses in the clad layer and in the RPV steel substrate, arising both from the thermal cycle and from the very different thermal and mechanical properties of the austenitic clad layer and the ferritic RPV steel. This work measures residual stresses using the deep hole drilling (DHD) and neutron diffraction (ND) techniques and compares residual stress data obtained by the two methods in a stainless clad coupon of A533B Class 2 steel. The results give confidence that both techniques are capable of assessing the trends in residual stresses, and their magnitudes. Significant differences are that the ND data shows greater values of the tensile stress peaks (∼100 MPa) than the DHD data but has a higher systematic error associated with it. The stress peaks are sharper with the ND technique and also differ in spatial position by around 1 mm compared with the DHD technique.

  2. Management of NORM Residues

    International Nuclear Information System (INIS)

    2013-06-01

    The IAEA attaches great importance to the dissemination of information that can assist Member States in the development, implementation, maintenance and continuous improvement of systems, programmes and activities that support the nuclear fuel cycle and nuclear applications, and that address the legacy of past practices and accidents. However, radioactive residues are found not only in nuclear fuel cycle activities, but also in a range of other industrial activities, including: - Mining and milling of metalliferous and non-metallic ores; - Production of non-nuclear fuels, including coal, oil and gas; - Extraction and purification of water (e.g. in the generation of geothermal energy, as drinking and industrial process water; in paper and pulp manufacturing processes); - Production of industrial minerals, including phosphate, clay and building materials; - Use of radionuclides, such as thorium, for properties other than their radioactivity. Naturally occurring radioactive material (NORM) may lead to exposures at some stage of these processes and in the use or reuse of products, residues or wastes. Several IAEA publications address NORM issues with a special focus on some of the more relevant industrial operations. This publication attempts to provide guidance on managing residues arising from different NORM type industries, and on pertinent residue management strategies and technologies, to help Member States gain perspectives on the management of NORM residues

  3. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  4. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  5. Soft Error Vulnerability of Iterative Linear Algebra Methods

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; de Supinski, B

    2007-12-15

    Devices become increasingly vulnerable to soft errors as their feature sizes shrink. Previously, soft errors primarily caused problems for space and high-atmospheric computing applications. Modern architectures now use features so small at sufficiently low voltages that soft errors are becoming significant even at terrestrial altitudes. The soft error vulnerability of iterative linear algebra methods, which many scientific applications use, is a critical aspect of the overall application vulnerability. These methods are often considered invulnerable to many soft errors because they converge from an imprecise solution to a precise one. However, we show that iterative methods can be vulnerable to soft errors, with a high rate of silent data corruptions. We quantify this vulnerability, with algorithms generating up to 8.5% erroneous results when subjected to a single bit-flip. Further, we show that detecting soft errors in an iterative method depends on its detailed convergence properties and requires more complex mechanisms than simply checking the residual. Finally, we explore inexpensive techniques to tolerate soft errors in these methods.
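The claim that detecting soft errors "requires more complex mechanisms than simply checking the residual" can be demonstrated with a toy experiment: if a bit flip corrupts the matrix itself, a stationary solver converges to the wrong answer while the residual, computed with the same corrupted operand, looks perfectly healthy. A hedged sketch (Jacobi on a 2x2 system, invented names), not the paper's methodology:

```python
import struct

def flip_bit(value, bit):
    """Flip one bit in the IEEE-754 binary64 representation of a float."""
    (bits,) = struct.unpack('<Q', struct.pack('<d', value))
    (out,) = struct.unpack('<d', struct.pack('<Q', bits ^ (1 << bit)))
    return out

def jacobi(A, b, iters=200):
    """Plain Jacobi iteration for A x = b (A diagonally dominant)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def residual_inf(A, b, x):
    """Infinity norm of the residual b - A x."""
    n = len(b)
    return max(abs(b[i] - sum(A[i][j] * x[j] for j in range(n))) for i in range(n))

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
good = jacobi(A, b)                        # ~ (1/11, 7/11)

# One flipped mantissa bit turns A[0][0] = 4.0 into 6.0.  The solver still
# converges and its residual check passes, but the answer is silently wrong.
A_bad = [[flip_bit(A[0][0], 51), A[0][1]], A[1][:]]
bad = jacobi(A_bad, b)                     # ~ (1/17, 11/17)
silent = residual_inf(A_bad, b, bad)       # tiny: corruption is invisible here
```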

  6. Evaluation of positioning errors of the patient using cone beam CT megavoltage; Evaluacion de errores de posicionamiento del paciente mediante Cone Beam CT de megavoltaje

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-07-01

    Image-guided radiation therapy makes it possible to assess and correct the patient's position in the treatment unit, thus reducing the uncertainties due to patient positioning. This work assesses systematic and random errors from the corrections made to a series of patients with different pathologies through an off-line megavoltage cone-beam CT (CBCT) protocol. (Author)

  7. Error budget analysis of SCIAMACHY limb ozone profile retrievals using the SCIATRAN model

    Directory of Open Access Journals (Sweden)

    N. Rahpoe

    2013-10-01

    A comprehensive error characterization of SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric CHartographY) limb ozone profiles has been established based upon SCIATRAN transfer model simulations. The study was carried out in order to evaluate the possible impact of parameter uncertainties, e.g. in albedo, stratospheric aerosol optical extinction, temperature, pressure, pointing, and ozone absorption cross section, on the limb ozone retrieval. Together with the a posteriori covariance matrix available from the retrieval, total random and systematic errors are defined for SCIAMACHY ozone profiles. The main error sources are pointing errors, errors in the knowledge of stratospheric aerosol parameters, and cloud interference. Systematic errors are of the order of 7%, while the random error amounts to 10–15% for most of the stratosphere. These numbers can be used for the interpretation of instrument intercomparisons and the validation of the SCIAMACHY V 2.5 limb ozone profiles in a rigorous manner.
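Assuming the systematic and random contributions are independent, figures like those quoted combine in quadrature into a total error budget. A one-line sketch; the function name is invented and independence is an assumption:

```python
import math

def combined_relative_error(systematic, random_part):
    """Combine independent systematic and random relative errors in quadrature."""
    return math.sqrt(systematic ** 2 + random_part ** 2)
```

With 7% systematic and 10-15% random error, the combined relative error is roughly 12-17%.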

  8. Error Detection in ESL Teaching

    OpenAIRE

    Rogoveanu Raluca

    2011-01-01

    This study investigates the role of error correction in the larger paradigm of ESL teaching and learning. It conceptualizes error as an inevitable variable in the process of learning and as a frequently occurring element in written and oral discourses of ESL learners. It also identifies specific strategies in which error can be detected and corrected and makes reference to various theoretical trends and their approach to error correction, as well as to the relation between language instructor...

  9. [The error, source of learning].

    Science.gov (United States)

    Joyeux, Stéphanie; Bohic, Valérie

    2016-05-01

    The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  10. Human error in strabismus surgery: Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    S. Schutte (Sander); J.R. Polling (Jan Roelof); F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)

    2009-01-01

    textabstractBackground: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence

  11. Human error in strabismus surgery : Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.

    2008-01-01

    Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods- We identified the primary factors that influence the outcome of

  12. Pitch Error Analysis of Young Piano Students' Music Reading Performances

    Science.gov (United States)

    Rut Gudmundsdottir, Helga

    2010-01-01

    This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

  13. GMM estimation in panel data models with measurement error

    NARCIS (Netherlands)

    Wansbeek, T.J.

    Griliches and Hausman (J. Econom. 32 (1986) 93) have introduced GMM estimation in panel data models with measurement error. We present a simple, systematic approach to derive moment conditions for such models under a variety of assumptions. (C) 2001 Elsevier Science S.A. All rights reserved.

  14. Proposed systematic methodology for analysis of Pb-210 radioactivity in residues produced in Brazilian natural gas pipes; Proposicao de um modelo analitico sistematico da atividade de Pb-210 em residuos gerados em linhas de gas

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Aloisio Cordilha

    2003-11-15

    Since the 1980s, the potential radiological hazards due to the handling of solid wastes contaminated with long-lived Rn-222 progeny - Pb-210 in particular - produced in gas pipes and removed by pig operations have been a subject of growing concern outside Brazil. Nevertheless, little or no attention has been paid to this matter in Brazilian plants up to now, these hazards being frequently underestimated or even ignored. The main purpose of this work was to propose a systematic methodology for analysis of Pb-210 radioactivity in black-powder samples from some Brazilian plants, through the evaluation of the technical viability of direct Pb-210 gamma spectrometry and of Bi-210 beta counting. In both cases, one in five samples of black powder analysed showed relevant activity (above 1 Bq/kg) of Pb-210, these results probably being related to particular features of each specific plant (production levels, reservoir geochemical profile, etc.), in such a way that no single pattern is observed. For the proposed methodology, gamma spectrometry proved to be the most reliable technique, showing a 3.5% standard deviation and, at a 95% confidence level, agreement with the range of Pb-210 activity concentration presented in the standard-sample reference sheet provided by the IAEA for intercomparison purposes. In the Brazilian scene, however, the statistically supported evidence available is insufficient to allow the potential radiological hazard due to the management of black powder to be discarded. Thus, further research efforts are recommended in order to identify the potentially critical regions or plants where gas exploration, production and processing practices will require a regular program of radiological surveillance in the near future. (author)

  15. Reducing nurse medicine administration errors.

    Science.gov (United States)

    Ofosu, Rose; Jarrett, Patricia

    Errors in administering medicines are common and can compromise the safety of patients. This review discusses the causes of drug administration error in hospitals by student and registered nurses, and the practical measures educators and hospitals can take to improve nurses' knowledge and skills in medicines management, and reduce drug errors.

  16. Cardiovascular medication errors in children.

    Science.gov (United States)

    Alexander, Diana C; Bundy, David G; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2009-07-01

    We sought to describe pediatric cardiovascular medication errors and to determine which patients and medications were associated with more frequently reported and/or more harmful errors. We analyzed cardiovascular medication error reports from 2003-2004 for pediatric patients, classified by severity (categories B-D, error without harm; E-I, harmful error). Proportions of harmful reports were determined according to drug class and age group. "High-risk" drugs were defined as antiarrhythmics, antihypertensives, digoxin, and calcium channel blockers. A total of 147 facilities submitted 821 reports, with community hospitals predominating (70%). Mean patient age was 4 years (median: 0.9 years). The most common error locations were NICUs, general care units, PICUs, pediatric units, and inpatient pharmacies. Drug administration, particularly improper dosing, was implicated most commonly. Severity analysis showed 5% "near misses," 91% errors without harm, and 4% harmful errors, with no reported fatalities. A total of 893 medications were cited in 821 reports. Diuretics were cited most frequently, followed by antihypertensives, angiotensin inhibitors, beta-adrenergic receptor blockers, digoxin, and calcium channel blockers. Calcium channel blockers, phosphodiesterase inhibitors, antiarrhythmics, and digoxin had the largest proportions of harmful events, although the values were not statistically significantly different from those for other drug classes. Infants predominated among the medication errors reaching inpatients in this national, voluntary error-reporting database. Proportions of harmful errors did not differ significantly by age or cardiovascular medication class. Most errors were related to medication administration, largely due to improper dosing.

  17. Residual-stress measurements

    Energy Technology Data Exchange (ETDEWEB)

    Ezeilo, A.N.; Webster, G.A. [Imperial College, London (United Kingdom); Webster, P.J. [Salford Univ. (United Kingdom)

    1997-04-01

    Because neutrons can penetrate distances of up to 50 mm in most engineering materials, they are uniquely suited to establishing residual-stress distributions non-destructively. D1A is particularly suited for through-surface measurements, as it does not suffer from the instrumental surface aberrations commonly found on multidetector instruments, while D20 is best for fast internal-strain scanning. Two examples of residual-stress measurements, in a shot-peened material and in a weld, are presented to demonstrate the attractive features of both instruments. (author).

  18. Dosimetric implications of inter- and intrafractional prostate positioning errors during tomotherapy. Comparison of gold marker-based registrations with native MVCT

    Energy Technology Data Exchange (ETDEWEB)

    Wust, Peter; Joswig, Marc; Graf, Reinhold; Boehmer, Dirk; Beck, Marcus; Barelkowski, Thomasz; Budach, Volker; Ghadjar, Pirus [Charite Universitaetsmedizin Berlin, Department of Radiation Oncology and Radiotherapy, Berlin (Germany)

    2017-09-15

    For high-dose radiation therapy (RT) of prostate cancer, image-guided (IGRT) and intensity-modulated RT (IMRT) approaches are standard. Less is known regarding comparisons of different IGRT techniques and the resulting residual errors, as well as regarding their influences on dose distributions. A total of 58 patients who received tomotherapy-based RT up to 84 Gy for high-risk prostate cancer underwent IGRT based either on daily megavoltage CT (MVCT) alone (n = 43) or the additional use of gold markers (n = 15) under routine conditions. Planned Adaptive (Accuray Inc., Madison, WI, USA) software was used for an elaborate offline analysis to quantify residual interfractional prostate positioning errors, along with systematic and random errors and the resulting safety margins, after both IGRT approaches. Dosimetric parameters for clinical target volume (CTV) coverage and exposure of organs at risk (OAR) were also analyzed and compared. Interfractional as well as intrafractional displacements were determined. Particularly in the vertical direction, residual interfractional positioning errors were reduced using the gold marker-based approach, but dosimetric differences were moderate and the clinical relevance relatively small. Intrafractional prostate motion proved to be quite high, with displacements of 1-3 mm; however, these did not result in additional dosimetric impairments. Residual interfractional positioning errors were reduced using gold marker-based IGRT; however, this resulted in only slightly different final dose distributions. Therefore, daily MVCT-based IGRT without markers might be a valid alternative. (orig.)

  19. Residual stress measurement by neutron diffraction

    International Nuclear Information System (INIS)

    Akita, Koichi; Suzuki, Hiroshi

    2010-01-01

    Neutron diffraction has the great advantage of allowing residual stresses present deep within bulk materials and components to be determined nondestructively. The method has therefore been applied to confirm the structural integrity of actual mechanical components and structures and to improve the manufacturing processes and strength reliability of products. This article reviews the methodology of residual stress measurement by neutron diffraction. It also discusses the appropriate treatment of the diffraction plane, the stress-free lattice spacing, coarse grains and surface errors needed to obtain reliable results. Finally, a few applications are introduced to show the capabilities of the neutron stress measurement method for studies of the strength and elasto-plastic behaviour of crystalline materials. (author)
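    The strain-to-stress conversion underlying such measurements can be sketched as follows. The d-spacings, the stress-free reference d0 and the elastic constants below are illustrative placeholders, not values from the article:

    ```python
    import numpy as np

    def lattice_strain(d, d0):
        """Elastic lattice strain from measured and stress-free d-spacings."""
        return (np.asarray(d, dtype=float) - d0) / d0

    def triaxial_stress(eps, E=200e9, nu=0.3):
        """Hooke's law for an isotropic material: principal stresses (Pa)
        from the three principal lattice strains."""
        eps = np.asarray(eps, dtype=float)
        lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
        mu = E / (2 * (1 + nu))                    # shear modulus
        return 2 * mu * eps + lam * eps.sum()

    # Illustrative d-spacings (angstroms) measured along x, y, z.
    d0 = 1.1702
    d = np.array([1.1706, 1.1704, 1.1699])
    eps = lattice_strain(d, d0)
    sigma = triaxial_stress(eps)   # principal residual stresses in Pa
    ```

    The same two steps (strain from peak shift, stress via Hooke's law) apply whichever diffraction plane is chosen; the article's cautions about the stress-free spacing d0 enter through the first function.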

  20. Sun drying of residual annatto seed powder

    Directory of Open Access Journals (Sweden)

    Dyego da Costa Santos

    2015-01-01

    Full Text Available Residual annatto seeds are a waste product of bixin extraction in the food, pharmaceutical and cosmetic industries. Most of this by-product is currently discarded; however, using these seeds in human foods, as a powder added to other commercial powders, is seen as a viable option. This study aimed to dry residual annatto seed powder, with and without the oil layer derived from the industrial extraction of bixin, fitting different mathematical models to the experimental data and calculating the effective moisture diffusivity of the samples. The powder containing oil exhibited the shortest drying time, the highest drying rate (≈ 5.0 kg kg-1 min-1) and the highest effective diffusivity (6.49 × 10-12 m2 s-1). All the mathematical models assessed represented the drying kinetics of powders with and without oil well, with R2 above 0.99 and root mean square error values lower than 1.0.
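    The abstract does not name the thin-layer models fitted; as one common example, the Page model MR = exp(-k t^n) can be fitted by linearization and scored with the same R2/RMSE statistics the study reports. The data below are synthetic, generated from assumed parameters:

    ```python
    import numpy as np

    def fit_page_model(t, mr):
        """Fit the Page drying model MR = exp(-k * t**n) by linear least
        squares on the linearized form ln(-ln MR) = ln k + n ln t."""
        n, ln_k = np.polyfit(np.log(t), np.log(-np.log(mr)), 1)
        return np.exp(ln_k), n

    def goodness_of_fit(mr, mr_pred):
        """Coefficient of determination and root mean square error."""
        ss_res = np.sum((mr - mr_pred) ** 2)
        ss_tot = np.sum((mr - mr.mean()) ** 2)
        return 1 - ss_res / ss_tot, np.sqrt(np.mean((mr - mr_pred) ** 2))

    # Synthetic moisture-ratio curve from assumed parameters.
    t = np.arange(1.0, 61.0)                 # drying time, min
    k_true, n_true = 0.05, 1.2
    mr = np.exp(-k_true * t ** n_true)

    k, n = fit_page_model(t, mr)
    r2, rmse = goodness_of_fit(mr, np.exp(-k * t ** n))
    ```

    On noise-free synthetic data the fit recovers the generating parameters exactly; on real drying curves the same R2 and RMSE values serve as the selection criteria mentioned in the abstract.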

  1. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen; Williamson, Jeffrey F.; Schmidt-Ullrich, Rupert K.

    2005-01-01

    Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for σ = Σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), the clinical target volume (CTV) D90, the nodal D90, the cord D2, and the parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% having a 5% dose error.
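    The random-setup-error simulation described above, convolving each beam's 2D fluence map with the setup-error probability density, can be sketched with a separable Gaussian kernel. The grid spacing, σ and the toy "segment" below are illustrative assumptions, not the study's data:

    ```python
    import numpy as np

    def gaussian_kernel(sigma_mm, spacing_mm, nsigma=4):
        """Discrete 1D Gaussian PDF sampled on the fluence grid, normalised."""
        half = int(np.ceil(nsigma * sigma_mm / spacing_mm))
        x = np.arange(-half, half + 1) * spacing_mm
        k = np.exp(-0.5 * (x / sigma_mm) ** 2)
        return k / k.sum()

    def blur_fluence(fluence, sigma_mm, spacing_mm):
        """Convolve a 2D fluence map with an isotropic Gaussian setup-error
        PDF, applied separably along rows and then columns."""
        k = gaussian_kernel(sigma_mm, spacing_mm)
        rows = np.apply_along_axis(np.convolve, 1, fluence, k, mode="same")
        return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

    # Toy 2 mm grid with a single open segment of unit fluence.
    f = np.zeros((41, 41))
    f[15:26, 15:26] = 1.0
    f_blurred = blur_fluence(f, sigma_mm=3.0, spacing_mm=2.0)
    ```

    The blurred fluence represents the expected fluence over many fractions under σ = 3 mm random errors: total fluence is conserved while sharp penumbrae are smeared, which is why random errors mainly soften dose gradients rather than shift them.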

  2. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Lili; Tian, Li; Wang, Desheng

    2008-10-31

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.
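    As a much-simplified, flat-space illustration of what a residual-based indicator looks like (not the surface estimator analysed in the paper), consider a 1D Poisson problem: each cell's indicator combines the interior residual with the jump of the discrete flux across its endpoints, and the largest indicators flag cells for refinement:

    ```python
    import numpy as np

    def solve_poisson_fd(f, n):
        """Second-order solution of -u'' = f on [0,1], u(0) = u(1) = 0,
        with n interior nodes on a uniform grid."""
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1 - h, n)
        A = (np.diag(np.full(n, 2.0))
             - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        return x, np.linalg.solve(A, f(x)), h

    def residual_indicators(x, u, h, f):
        """Node-wise indicator eta = h*|f| + sqrt(h)*|[u_h']|, where [u_h']
        is the jump of the gradient of the piecewise-linear reconstruction
        across each interior node."""
        ub = np.concatenate(([0.0], u, [0.0]))   # include boundary values
        grad = np.diff(ub) / h                   # gradient on each cell
        jumps = np.abs(np.diff(grad))            # flux jump at interior nodes
        return h * np.abs(f(x)) + np.sqrt(h) * jumps

    f = lambda x: 100.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # peaked source
    x, u, h = solve_poisson_fd(f, 99)
    eta = residual_indicators(x, u, h, f)
    flagged = x[eta > 0.5 * eta.max()]   # cells marked for refinement
    ```

    The indicators concentrate where the source (and hence the solution curvature) is large, which is the behaviour a reliable and efficient estimator must reproduce on surfaces as well.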

  3. Bayesian Total Error Analysis - An Error Sensitive Approach to Model Calibration

    Science.gov (United States)

    Franks, S. W.; Kavetski, D.; Kuczera, G.

    2002-12-01

    The majority of environmental models require calibration of their parameters before meaningful predictions of catchment behaviour can be made. Despite the importance of reliable parameter estimates, there are growing concerns about the ability of objective-based inference methods to adequately calibrate environmental models. The problem lies with the formulation of the objective or likelihood function, which is currently implemented using essentially ad hoc methods. We outline the limitations of current calibration methodologies and introduce a more systematic Bayesian Total Error Analysis (BATEA) framework for environmental model calibration and validation, which imposes a hitherto missing rigour in environmental modelling by requiring the specification of physically realistic model and data uncertainty models, with explicit assumptions that can and must be tested against available evidence. The BATEA formalism enables inference of the hydrological parameters and also of any latent variables of the uncertainty models, e.g., precipitation depth errors. The latter could be useful for improving data sampling and measurement methodologies. In addition, distinguishing between the various sources of errors will reduce the current ambiguity about parameter and predictive uncertainty and enable rational testing of environmental models' hypotheses. Markov chain Monte Carlo methods are employed to manage the increased computational requirements of BATEA. A case study using synthetic data demonstrates that explicitly accounting for forcing errors yields immediate advantages over traditional regression (e.g., standard least squares calibration), which ignores corruption of the rainfall history, and over pseudo-likelihood methods (e.g., GLUE), which do not explicitly characterise data and model errors. It is precisely data and model errors that are responsible for the need for calibration in the first place; we expect that understanding these errors will force fundamental shifts in the model
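    The computational core of such a framework, Markov chain Monte Carlo sampling of parameters under an explicitly stated error model, can be illustrated with a toy problem. This is a generic random-walk Metropolis sketch on a linear model with Gaussian noise, not the BATEA implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic "observations": y = a*x + noise, with a_true = 2.0.
    x = np.linspace(0.0, 1.0, 50)
    a_true, sigma = 2.0, 0.1
    y = a_true * x + rng.normal(0.0, sigma, x.size)

    def log_likelihood(a):
        """Gaussian likelihood with an explicitly assumed noise model."""
        r = y - a * x                      # residual errors of the model
        return -0.5 * np.sum((r / sigma) ** 2)

    def metropolis(n_iter=3000, step=0.1, a0=0.0):
        """Random-walk Metropolis sampler for the single parameter a."""
        chain = np.empty(n_iter)
        a, ll = a0, log_likelihood(a0)
        for i in range(n_iter):
            prop = a + rng.normal(0.0, step)
            ll_prop = log_likelihood(prop)
            if np.log(rng.random()) < ll_prop - ll:   # accept/reject
                a, ll = prop, ll_prop
            chain[i] = a
        return chain

    chain = metropolis()
    posterior_mean = chain[500:].mean()   # discard burn-in
    ```

    BATEA extends this idea by adding latent variables (e.g., per-storm rainfall depth errors) to the sampled state, so that forcing corruption is inferred jointly with the hydrological parameters rather than absorbed into a lumped residual.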

  4. Origins of coevolution between residues distant in protein 3D structures

    OpenAIRE

    Anishchenko, Ivan; Ovchinnikov, Sergey; Kamisetty, Hetunandan; Baker, David

    2017-01-01

    Coevolution-derived contact predictions are enabling accurate protein structure modeling. However, coevolving residues are not always in contact, and this is a potential source of error in such modeling efforts. To investigate the sources of such errors and, more generally, the origins of coevolution in protein structures, we provide a global overview of the contributions to the “exceptions” to the general rule that coevolving residues are close in protein three-dimensional structures.

  5. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis cart system. Method: For two months, the Pharmacy Service monitored medication either returned or missing from the unidosis carts, both in the pharmacy and on the wards. Results: Unidosis carts that had not been checked showed a 0.9% medication error rate (264 errors), versus 0.6% (154 errors) in carts previously checked. In unchecked carts, 70.83% of the errors arose when the carts were set up; the rest were due to lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied beforehand (0.76%). The errors found on the wards corresponded to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not given by nurses (14.09%), medication withdrawn from the ward stock (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Re-checking the unidosis carts and a computerized prescription system are needed to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to the hospitalization units, the error rate falls to 0.3%.

  6. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors - the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring them [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  7. Error adaptation in mental arithmetic.

    Science.gov (United States)

    Desmet, Charlotte; Imbo, Ineke; De Brauwer, Jolien; Brass, Marcel; Fias, Wim; Notebaert, Wim

    2012-01-01

    Until now, error and conflict adaptation have been studied extensively using simple laboratory tasks. A common finding is that responses slow down after errors. According to the conflict monitoring theory, performance should also improve after an error. However, this is usually not observed. In this study, we investigated whether the characteristics of the experimental paradigms normally used could explain this absence. More precisely, these paradigms have in common that behavioural adaptation has little room to be expressed. We therefore studied error and conflict adaptation effects in a task that captures the richness of everyday behavioural adaptation, namely mental arithmetic, where multiple solution strategies are available. In accordance with our hypothesis, we observed that accuracy increases after errors in mental arithmetic. No support for conflict adaptation in mental arithmetic was found. Implications for current theories of conflict and error monitoring are discussed.

  8. Composition of carbonization residues

    Energy Technology Data Exchange (ETDEWEB)

    Hupfer; Leonhardt

    1943-11-27

    This report compared the composition of samples from Wesseling and Leuna. In each case the sample was a residue from carbonization of the residues from hydrogenation of the brown coal processed at the plant. The composition was given in terms of volatile components, fixed carbon, ash, water, carbon, hydrogen, oxygen, nitrogen, volatile sulfur, and total sulfur. The result of carbonization was given in terms of (ash and) coke, tar, water, gas and losses, and bitumen. The composition of the ash was given in terms of silicon dioxide, ferric oxide, aluminum oxide, calcium oxide, magnesium oxide, potassium and sodium oxides, sulfur trioxide, phosphorus pentoxide, chlorine, and titanium oxide. The most important difference between the properties of the two samples was that the residue from Wesseling only contained 4% oil, whereas that from Leuna had about 26% oil. Taking into account the total amount of residue processed yearly, the report noted that better carbonization at Leuna could save 20,000 metric tons/year of oil. Some other comparisons of data included about 33% volatiles at Leuna vs. about 22% at Wesseling, about 5 1/2% sulfur at Leuna vs. about 6 1/2% at Wesseling, but about 57% ash for both. Composition of the ash differed quite a bit between the two. 1 table.

  9. Designing with residual materials

    NARCIS (Netherlands)

    Walhout, W.; Wever, R.; Blom, E.; Addink-Dölle, L.; Tempelman, E.

    2013-01-01

    Many entrepreneurial businesses have attempted to create value based on the residual material streams of third parties. Based on ‘waste’ materials they designed products, around which they built their company. Such activities have the potential to yield sustainable products. Many of such companies

  10. Conoscopic holography systematic error processing by means of gaussian filters

    OpenAIRE

    Zapico, Pablo; Patiño, Héctor; Fernández, Pedro; Valiño, Gonzalo; Rico, J.C. (José)

    2018-01-01

    This work analyses the directional effect shown by point clouds when digitizing with a conoscopic holography (CH) sensor. Because the laser spot of this sensor is asymmetric, the directionality appears along the greatest spot length, and it occurs repeatedly under different working conditions. To study this effect of the sensor, several tests were performed on a surface machined by EDM with a very uniform and isotropic finish, so that the directional effect should not appear actu...

  11. Consequences of leaf calibration errors on IMRT delivery

    International Nuclear Information System (INIS)

    Sastre-Padro, M; Welleweerd, J; Malinen, E; Eilertsen, K; Olsen, D R; Heide, U A van der

    2007-01-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT
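    The gamma evaluation used above (2%/2 mm criteria) can be sketched in one dimension. The criteria match the abstract, but the dose profiles below are invented for illustration:

    ```python
    import numpy as np

    def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
        """Global 1D gamma index: for each evaluated point, the minimum over
        all reference points of sqrt((dx/dta)^2 + (ddose/(dd*Dmax))^2).
        dd is the dose criterion as a fraction of the reference maximum;
        dta is the distance-to-agreement criterion in mm."""
        d_norm = dd * d_ref.max()
        gam = np.empty(x_eval.size)
        for i, (xe, de) in enumerate(zip(x_eval, d_eval)):
            dist2 = ((x_ref - xe) / dta) ** 2
            dose2 = ((d_ref - de) / d_norm) ** 2
            gam[i] = np.sqrt(np.min(dist2 + dose2))
        return gam

    # Invented profiles on a 1 mm grid; the evaluated profile is the
    # reference shifted by 1 mm, mimicking a small leaf/setup offset.
    x = np.arange(0.0, 100.0)                       # position, mm
    ref = np.exp(-0.5 * ((x - 50.0) / 10.0) ** 2)
    shifted = np.exp(-0.5 * ((x - 51.0) / 10.0) ** 2)

    gamma_same = gamma_index_1d(x, ref, x, ref)
    gamma_shift = gamma_index_1d(x, ref, x, shifted)
    pass_rate = np.mean(gamma_shift <= 1.0)   # fraction of points passing
    ```

    A 1 mm rigid shift passes easily under a 2 mm distance criterion (gamma stays at or below about 0.5), which is why only the larger simulated leaf offsets in the study produced unacceptable plans.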

  12. Hospital medication errors in a pharmacovigilance system in Colombia

    Directory of Open Access Journals (Sweden)

    Jorge Enrique Machado-Alba

    2015-11-01

    Full Text Available Objective: this study analyzes the medication errors reported to a pharmacovigilance system by 26 hospitals for patients in the healthcare system of Colombia. Methods: this retrospective study analyzed the medication errors reported to a systematized database between 1 January 2008 and 12 September 2013. The medication is dispensed by the company Audifarma S.A. to hospitals and clinics around Colombia. Data were classified according to the taxonomy of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP). The data analysis was performed using SPSS 22.0 for Windows, considering p-values < 0.05 significant. Results: there were 9 062 medication errors in 45 hospital pharmacies. Real errors accounted for 51.9% (n = 4 707), of which 12.0% (n = 567) reached the patient (Categories C to I) and caused harm (Categories E to I) to 17 subjects (0.36%). The main process involved in errors that occurred (Categories B to I) was prescription (n = 1 758, 37.3%), followed by dispensation (n = 1 737, 36.9%), transcription (n = 970, 20.6%) and administration (n = 242, 5.1%). Errors in the administration process were 45.2 times more likely to reach the patient (95% CI: 20.2–100.9). Conclusions: medication error reporting systems and prevention strategies should be widespread in hospital settings, prioritizing efforts to address the administration process.
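    The "45.2 times more likely" figure is an odds-ratio-style estimate with a confidence interval. How such a ratio and its 95% CI are computed from a 2x2 table can be sketched as follows; the counts below are hypothetical, not those of the study:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio for a 2x2 table with a Woolf (log-normal) 95% CI.
        a: exposed with outcome, b: exposed without outcome,
        c: unexposed with outcome, d: unexposed without outcome."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # Hypothetical counts: administration-stage errors reaching the patient
    # vs. errors in all other stages reaching the patient.
    or_, lo, hi = odds_ratio_ci(20, 80, 5, 95)
    ```

    With these made-up counts the odds ratio is 4.75; the study's 45.2 (95% CI 20.2-100.9) would come from its own 2x2 counts in the same way.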

  13. Software defect prevention based on human error theories

    Directory of Open Access Journals (Sweden)

    Fuqun HUANG

    2017-06-01

    Full Text Available Software defect prevention is an important way to reduce the defect introduction rate. As the primary cause of software defects, human error can be the key to understanding and preventing software defects. This paper proposes a defect prevention approach based on human error mechanisms: DPeHE. The approach includes both knowledge and regulation training in human error prevention. Knowledge training provides programmers with explicit knowledge on why programmers commit errors, what kinds of errors tend to be committed under different circumstances, and how these errors can be prevented. Regulation training further helps programmers to promote the awareness and ability to prevent human errors through practice. The practice is facilitated by a problem solving checklist and a root cause identification checklist. This paper provides a systematic framework that integrates knowledge across disciplines, e.g., cognitive science, software psychology and software engineering to defend against human errors in software development. Furthermore, we applied this approach in an international company at CMM Level 5 and a software development institution at CMM Level 1 in the Chinese Aviation Industry. The application cases show that the approach is feasible and effective in promoting developers’ ability to prevent software defects, independent of process maturity levels.

  14. The probability and the management of human error

    Energy Technology Data Exchange (ETDEWEB)

    Duffey, R.B. [Atomic Energy of Canada Limited, Chalk River Laboratories, Chalk River, ON (Canada); Saull, J.W. [International Federation of Airworthiness, Sussex (United Kingdom)

    2004-07-01

    Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident causes. The consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need to provide a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with, and derived from, the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best-estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error-state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε) and stochastic occurrences with a finite minimum rate: λ = 5×10⁻⁵ + ((1/ε) − 5×10⁻⁵) exp(−3ε). The future failure rate is entirely determined by the experience: thus the past defines the future.
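    The error-rate equation quoted above can be evaluated directly. A minimal sketch, reading the (partially garbled) formula as λ(ε) = 5×10⁻⁵ + ((1/ε) − 5×10⁻⁵)·exp(−3ε), with ε the accumulated experience in the authors' units:

    ```python
    import math

    def human_error_rate(eps):
        """Learning-curve form of the human error rate:
        lambda(eps) = 5e-5 + ((1/eps) - 5e-5) * exp(-3*eps)."""
        lam_min = 5e-5
        return lam_min + (1.0 / eps - lam_min) * math.exp(-3.0 * eps)

    # Rate at increasing levels of accumulated experience: it starts
    # near 1/eps for novices and decays toward the minimum rate 5e-5.
    rates = [human_error_rate(e) for e in (0.01, 0.1, 1.0, 10.0)]
    ```

    The two regimes are visible immediately: for small ε the 1/ε term dominates (inexperience), while for large ε the exponential kills the learning term and only the irreducible floor of 5×10⁻⁵ remains.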

  15. Error estimation for goal-oriented spatial adaptivity for the SN equations on triangular meshes

    International Nuclear Information System (INIS)

    Lathouwers, D.

    2011-01-01

    In this paper we investigate different error estimation procedures for use within a goal-oriented adaptive algorithm for the SN equations on unstructured meshes. The method is based on a dual-weighted residual approach, in which an appropriate adjoint problem is formulated and solved in order to obtain the importance of the residual errors of the forward problem for the specific goal of interest. The forward residuals and the adjoint function are combined both to obtain economical finite element meshes tailored to the computation of the target functional and to provide error estimates. Various approximations made to render the calculation of the adjoint angular flux more economical are evaluated by comparing the performance of the resulting adaptive algorithm and the quality of the error estimators on two shielding-type test problems. (author)

  16. Error analysis and system optimization of non-null aspheric testing system

    Science.gov (United States)

    Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo

    2010-10-01

    A non-null aspheric testing system, which employs a partial null lens (PNL for short) and a reverse iterative optimization reconstruction (ROR for short) technique, is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder, originating from the non-null interferometer, which is handled by the approach of error-storage subtraction. Experimental results show that, after the systematic error is removed from the test result, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.
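    The error-storage-subtraction idea, characterising the instrument's repeatable systematic error once and subtracting it from subsequent tests, can be sketched on synthetic data (the surface, error shapes and noise level below are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    x = np.linspace(-1.0, 1.0, 201)
    surface = 0.2 * x**2                        # "true" figure under test
    systematic = 0.05 * np.sin(4 * np.pi * x)   # repeatable instrument error

    def measure():
        """One measurement = surface + systematic error + random noise."""
        return surface + systematic + rng.normal(0.0, 0.002, x.size)

    # Error storage: average repeated measurements of a known reference
    # (here the surface is treated as known and subtracted) to isolate
    # the repeatable systematic part.
    stored = np.mean([measure() - surface for _ in range(50)], axis=0)

    raw = measure()
    corrected = raw - stored   # subtract the stored systematic error

    rms_before = np.sqrt(np.mean((raw - surface) ** 2))
    rms_after = np.sqrt(np.mean((corrected - surface) ** 2))
    ```

    Averaging suppresses the random noise, so the stored map converges to the systematic component alone; subtracting it leaves the corrected result limited only by the random noise floor.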

  17. Residual gauge invariance of Hamiltonian lattice gauge theories

    International Nuclear Information System (INIS)

    Ryang, S.; Saito, T.; Shigemoto, K.

    1984-01-01

    The time-independent residual gauge invariance of Hamiltonian lattice gauge theories is considered. Eigenvalues and eigenfunctions of the unperturbed Hamiltonian are found in terms of Gegenbauer polynomials. Physical states which satisfy the subsidiary condition corresponding to Gauss' law are constructed systematically. (orig.)
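    The Gegenbauer polynomials mentioned above satisfy the standard three-term recurrence n C_n^λ(x) = 2x(n+λ-1) C_{n-1}^λ(x) − (n+2λ-2) C_{n-2}^λ(x), which gives a simple way to evaluate them; a minimal sketch:

    ```python
    def gegenbauer(n, lam, x):
        """Evaluate the Gegenbauer polynomial C_n^lam(x) by the
        three-term recurrence, starting from C_0 = 1 and C_1 = 2*lam*x."""
        if n == 0:
            return 1.0
        c_prev, c = 1.0, 2.0 * lam * x
        for k in range(2, n + 1):
            c_prev, c = c, (2.0 * x * (k + lam - 1.0) * c
                            - (k + 2.0 * lam - 2.0) * c_prev) / k
        return c

    # For lam = 1 these reduce to Chebyshev polynomials of the second
    # kind, e.g. C_2^1(x) = 4x^2 - 1, which vanishes at x = 0.5.
    val = gegenbauer(2, 1.0, 0.5)
    ```

    The recurrence is numerically stable for the parameter ranges that occur in such eigenfunction expansions, which is why it is preferred over the explicit sum formula.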

  18. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex: in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion was ascribed (24 patients) and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small as to large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  19. A Geomagnetic Reference Error Model

    Science.gov (United States)

    Maus, S.; Woods, A. J.; Nair, M. C.

    2011-12-01

    The accuracy of geomagnetic field models, such as the International Geomagnetic Reference Field (IGRF) and the World Magnetic Model (WMM), has benefitted tremendously from the ongoing series of satellite magnetic missions. However, what do we mean by accuracy? When comparing a geomagnetic reference model with a magnetic field measurement (for example, from an electronic compass), three contributions play a role: (1) the instrument error, which is not the subject of this discussion; (2) the error of commission, namely the error of the model coefficients themselves in representing the geomagnetic main field; and (3) the error of omission, comprising contributions to the geomagnetic field which are not represented in the reference model. The latter can further be subdivided into the omission of the crustal field and the omission of the disturbance field. Several factors have a strong influence on these errors: the error of commission primarily depends on the time elapsed since the last update of the reference model; the omission error for the crustal field depends on the altitude of the measurement; and the omission error for the disturbance field has a strong latitudinal dependence, peaking under the auroral electrojets. A further complication arises for the uncertainty in magnetic declination, which depends inversely on the strength of the horizontal field. Here, we present an error model which takes all of these factors into account. This error model will be implemented as an online calculator, providing the uncertainty of the magnetic elements at the entered location and time.
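    A sketch of such an error budget: independent commission and omission terms combined in quadrature, with the declination uncertainty scaling inversely with the horizontal field strength. The numeric values are illustrative placeholders, not WMM or IGRF error figures:

    ```python
    import math

    def total_field_uncertainty(commission_nt, crustal_nt, disturbance_nt):
        """Combine independent error sources (in nT) in quadrature."""
        return math.sqrt(commission_nt**2 + crustal_nt**2 + disturbance_nt**2)

    def declination_uncertainty_deg(sigma_nt, h_nt):
        """Declination uncertainty: a transverse field error sigma over a
        horizontal field of strength H subtends an angle ~ sigma/H rad."""
        return math.degrees(sigma_nt / h_nt)

    # Illustrative budget (nT): commission, crustal omission, disturbance.
    sigma = total_field_uncertainty(20.0, 150.0, 80.0)

    d_mid = declination_uncertainty_deg(sigma, 20000.0)   # mid-latitude H
    d_polar = declination_uncertainty_deg(sigma, 2000.0)  # weak horizontal field
    ```

    The same total field uncertainty translates into a ten times larger declination uncertainty where the horizontal field is ten times weaker, which is why compass accuracy degrades so sharply near the magnetic poles.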

  20. Sepsis: Medical errors in Poland.

    Science.gov (United States)

    Rorat, Marta; Jurek, Tomasz

    2016-01-01

    Health, safety and medical errors are currently the subject of worldwide discussion. The authors analysed medico-legal opinions trying to determine types of medical errors and their impact on the course of sepsis. The authors carried out a retrospective analysis of 66 medico-legal opinions issued by the Wroclaw Department of Forensic Medicine between 2004 and 2013 (at the request of the prosecutor or court) in cases examined for medical errors. Medical errors were confirmed in 55 of the 66 medico-legal opinions. The age of victims varied from 2 weeks to 68 years; 49 patients died. The analysis revealed medical errors committed by 113 health-care workers: 98 physicians, 8 nurses and 8 emergency medical dispatchers. In 33 cases, an error was made before hospitalisation. Hospital errors occurred in 35 victims. Diagnostic errors were discovered in 50 patients, including 46 cases of sepsis being incorrectly recognised and insufficient diagnoses in 37 cases. Therapeutic errors occurred in 37 victims, organisational errors in 9 and technical errors in 2. In addition to sepsis, 8 patients also had a severe concomitant disease and 8 had a chronic disease. In 45 cases, the authors observed glaring errors, which could incur criminal liability. There is an urgent need to introduce a system for reporting and analysing medical errors in Poland. The development and popularisation of standards for identifying and treating sepsis across basic medical professions is essential to improve patient safety and survival rates. Procedures should be introduced to prevent health-care workers from administering incorrect treatment in cases. © The Author(s) 2015.

  1. Contour Error Map Algorithm

    Science.gov (United States)

    Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John

    2005-01-01

    The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
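
The binarization step described above can be sketched as follows. This is a hypothetical illustration, not the CEM implementation: it assumes a north-south coastline with the sea to the east, so that a wind blowing *from* the east (direction between 0 and 180 degrees in "wind-from" convention) counts as onshore (1) and anything else as offshore (0); a simple cell-by-cell agreement score stands in for the fuller contour comparison.

```python
import numpy as np

def binarize_onshore(direction_deg):
    """Map wind directions (degrees, 'wind from' convention) to 1 (onshore)
    or 0 (offshore), assuming the sea lies to the east of the coastline."""
    d = np.asarray(direction_deg) % 360.0
    return ((d > 0.0) & (d < 180.0)).astype(np.uint8)

def grid_agreement(D, d):
    """Fraction of grid cells where forecast D and observation d agree;
    a crude stand-in for the full CEM contour comparison."""
    return float(np.mean(np.asarray(D) == np.asarray(d)))

# D(i,j;n) from forecast directions, d(i,j;n) from gridded observations
forecast = binarize_onshore([[45, 270], [90, 200]])   # -> [[1, 0], [1, 0]]
observed = binarize_onshore([[60, 300], [90, 190]])   # -> [[1, 0], [1, 0]]
score = grid_agreement(forecast, observed)            # 1.0: full agreement
```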

  2. Residual stresses in material processing

    Science.gov (United States)

    Kozaczek, K. J.; Watkins, T. R.; Hubbard, C. R.; Wang, Xun-Li; Spooner, S.

    Material manufacturing processes often introduce residual stresses into the product. The residual stresses affect the properties of the material and often are detrimental. Therefore, the distribution and magnitude of residual stresses in the final product are usually an important factor in manufacturing process optimization or component life prediction. The present paper briefly discusses the causes of residual stresses. It then addresses the direct, nondestructive methods of residual stress measurement by X ray and neutron diffraction. Examples are presented to demonstrate the importance of residual stress measurement in machining and joining operations.

  3. Real-time GPS Satellite Clock Error Prediction Based On Non-stationary Time Series Model

    Science.gov (United States)

    Wang, Q.; Xu, G.; Wang, F.

    2009-04-01

    Analysis Centers of the IGS provide precise satellite ephemerides for GPS data post-processing. The accuracy of the orbit products is better than 5 cm, and that of the satellite clock errors (SCE) approaches 0.1 ns (igscb.jpl.nasa.gov), which meets the requirements of precise point positioning (PPP). Due to the 13-day latency of the IGS final products, only the broadcast ephemeris and the IGS ultra-rapid (predicted) products are applicable for real-time PPP (RT-PPP). Therefore, development of an approach to estimate high-precision GPS SCE in real time is of particular importance for RT-PPP. Many studies have been carried out on forecasting the corrections using models such as the Linear Model (LM), Quadratic Polynomial Model (QPM), Quadratic Polynomial Model with Cyclic corrected Terms (QPM+CT), Grey Model (GM) and Kalman Filter Model (KFM). However, the precisions of these models are generally at the nanosecond level. The purpose of this study is to develop a method by which SCE forecasting for RT-PPP can reach sub-nanosecond precision. Analysis of the last 8 years of IGS SCE data showed that the prediction precision depends on the stability of the individual satellite clock. The clocks of the most recent GPS satellites (BLOCK IIR and BLOCK IIR-M) are more stable than those of the earlier GPS satellites (BLOCK IIA). For a stable satellite clock, the SCE for the next 6 hours can easily be predicted with the LM. The residuals of unstable satellite clocks are periodic with noise components. Dominant periods of the residuals are found by using the Fourier transform and spectrum analysis. For the remaining part of the residuals, an auto-regression model is used to determine their systematic trends. Summarizing this study, a non-stationary time series model is proposed to predict GPS SCE in real time. This prediction model includes a linear term, cyclic corrected terms and an auto-regression term, which represent the SCE trend, the cyclic parts and the rest of the errors, respectively.
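
The proposed decomposition (linear trend + dominant cyclic terms + auto-regressive remainder) can be sketched on synthetic data. This is an illustration under stated assumptions, not the paper's implementation: the 12-hour period, AR(1) order, sampling interval and noise level are all made up here, whereas the paper finds the dominant periods via Fourier and spectrum analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 48, 0.25)                       # epochs in hours
truth = 2.0 + 0.05 * t + 0.3 * np.sin(2 * np.pi * t / 12.0)
series = truth + 0.02 * rng.standard_normal(t.size)   # synthetic clock error (ns)

# 1) Fit the linear trend by least squares.
A = np.column_stack([np.ones_like(t), t])
trend_coef, *_ = np.linalg.lstsq(A, series, rcond=None)
resid = series - A @ trend_coef

# 2) Fit a cyclic term at an assumed dominant period of 12 h.
period = 12.0
C = np.column_stack([np.cos(2 * np.pi * t / period),
                     np.sin(2 * np.pi * t / period)])
cyc_coef, *_ = np.linalg.lstsq(C, resid, rcond=None)
resid2 = resid - C @ cyc_coef

# 3) Fit an AR(1) model to the remainder and predict one step ahead.
phi = np.dot(resid2[1:], resid2[:-1]) / np.dot(resid2[:-1], resid2[:-1])
t_next = t[-1] + 0.25
prediction = (trend_coef[0] + trend_coef[1] * t_next
              + cyc_coef[0] * np.cos(2 * np.pi * t_next / period)
              + cyc_coef[1] * np.sin(2 * np.pi * t_next / period)
              + phi * resid2[-1])
```

Fitting the trend first and the cyclic terms on its residuals mirrors the structure described in the abstract; in practice the periods would come from the spectrum of each satellite's clock history rather than being assumed.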

  4. National Aeronautics and Space Administration "threat and error" model applied to pediatric cardiac surgery: error cycles precede ∼85% of patient deaths.

    Science.gov (United States)

    Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S

    2015-02-01

    We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.

  5. Processor register error correction management

    Science.gov (United States)

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
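
The scheme in the abstract (a compiler-produced table flags sensitive logical registers; the processor keeps a duplicate of each flagged register) can be illustrated with a toy model. Everything below is a hypothetical sketch, not the patented hardware design: writes to a flagged register are mirrored into a shadow copy, and reads cross-check the two copies to expose a soft error.

```python
# Compiler-generated error-correction table: which registers are sensitive.
error_correction_table = {"r3"}

registers = {}   # architectural register file (toy model)
shadow = {}      # duplicates of the sensitive registers

def write_reg(name, value):
    registers[name] = value
    if name in error_correction_table:
        shadow[name] = value             # keep the duplicate in sync

def read_reg(name):
    value = registers[name]
    if name in error_correction_table and shadow[name] != value:
        raise RuntimeError(f"soft error detected in {name}")
    return value

write_reg("r3", 42)
assert read_reg("r3") == 42
registers["r3"] = 7                      # simulate a bit-flip corrupting the register
# read_reg("r3") would now raise, exposing the mismatch with the duplicate
```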

  6. SRC Residual fuel oils

    Science.gov (United States)

    Tewari, Krishna C.; Foster, Edward P.

    1985-01-01

    Coal solids (SRC) and distillate oils are combined to afford single-phase blends of residual oils which have utility as fuel oils substitutes. The components are combined on the basis of their respective polarities, that is, on the basis of their heteroatom content, to assure complete solubilization of SRC. The resulting composition is a fuel oil blend which retains its stability and homogeneity over the long term.

  7. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward the high-frequency components of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error, which increases with the noise power. In order to reduce both the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error, with the Butterworth filter producing the lowest random error among them. By using the Wiener filter with over-estimated noise power, the random error can be reduced, but the resultant systematic error is higher than that of the low-pass filters. In general, the Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. The binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. When used together with pre-filtering, the B-spline interpolator produces lower systematic error than the bicubic interpolator and a similar level of random error.
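
A minimal frequency-domain Butterworth low-pass of the kind one might apply to speckle images before correlation can be sketched as follows. The cutoff frequency, filter order and test image are assumptions for illustration; the paper's exact filter settings are not reproduced here.

```python
import numpy as np

def butterworth_lowpass(image, cutoff=0.2, order=2):
    """Attenuate spatial frequencies above `cutoff` (cycles/pixel) to
    suppress the noise-dominated high frequencies before matching."""
    img = np.asarray(image, dtype=float)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    radius = np.sqrt(fx**2 + fy**2)
    H = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))     # Butterworth response
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# Low-frequency test signal plus additive white noise.
rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0, np.pi, 64)),
                 np.sin(np.linspace(0, np.pi, 64)))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
filtered = butterworth_lowpass(noisy)

# For a signal concentrated near DC, filtering moves the image closer to the clean one.
err_before = np.sqrt(np.mean((noisy - clean) ** 2))
err_after = np.sqrt(np.mean((filtered - clean) ** 2))
```

The gentle roll-off of the Butterworth response (no ringing, tunable passband) is what the abstract's recommendation refers to; a sharper ideal low-pass would introduce ringing artifacts into the speckle pattern.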

  8. Error analysis of the microradiographical determination of mineral content in mineralised tissue slices

    International Nuclear Information System (INIS)

    Jong, E. de J. de; Bosch, J.J. ten

    1985-01-01

    The microradiographic method, used to measure the mineral content in slices of mineralised tissues as a function of position, is analysed. The total error in the measured mineral content is split into systematic errors per microradiogram and random noise errors. These errors are measured quantitatively. Predominant contributions to systematic errors appear to be x-ray beam inhomogeneity, the determination of the step wedge thickness and stray light in the densitometer microscope, while noise errors are influenced by the choice of film, the value of the optical film transmission of the microradiographic image and the area of the densitometer window. Optimisation criteria are given. The authors used these criteria, together with the requirement that the method be fast and easy, to build an optimised microradiographic system. (author)
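
The split of the total error into a systematic part and a random noise part can be illustrated numerically. This is a hedged sketch with made-up numbers, not the authors' analysis: a constant offset stands in for a per-microradiogram systematic error (e.g. beam inhomogeneity), and the scatter about the mean stands in for the noise error.

```python
import numpy as np

rng = np.random.default_rng(2)
true_mineral = 1.00          # arbitrary units, assumed known for the demonstration
systematic_offset = 0.05     # illustrative bias, e.g. from beam inhomogeneity
noise_level = 0.02           # illustrative random noise per reading

# 200 simulated densitometer readings of the same slice position.
readings = true_mineral + systematic_offset + noise_level * rng.standard_normal(200)

# The mean shift estimates the systematic error; the scatter estimates the noise.
estimated_bias = readings.mean() - true_mineral      # ~0.05
noise_sigma = readings.std(ddof=1)                   # ~0.02
```

Averaging suppresses the random component (by 1/sqrt(N)) but leaves the systematic offset untouched, which is why the two contributions must be budgeted separately.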

  9. Composition of carbonization residues

    Energy Technology Data Exchange (ETDEWEB)

    Hupfer; Leonhardt

    1943-11-30

    This report gave a record of the composition of several samples of residues from carbonization of various hydrogenation residues from processing some type of coal or tar in the Bergius process. These included Silesian bituminous coal processed at 600 atm. with iron catalyst, in one case to produce gasoline and middle oil and in another case to produce heavy oil excess, Scholven coal processed at 250 atm. with tin oxalate and chlorine catalyst, Bruex tar processed in a 10-liter oven using iron catalyst, and a pitch mixture from Welheim processed in a 10-liter oven using iron catalyst. The values gathered were compared with a few corresponding values estimated for Boehlen tar and Gelsenberg coal based on several assumptions outlined in the report. The data recorded included percentage of ash in the dry residue and percentage of carbon, hydrogen, oxygen, nitrogen, chlorine, total sulfur, and volatile sulfur. The percentage of ash varied from 21.43% in the case of Bruex tar to 53.15% in the case of one of the Silesian coals. Percentage of carbon varied from 44.0% in the case of Scholven coal to 78.03% in the case of Bruex tar. Percentage of total sulfur varied from 2.28% for Bruex tar to a recorded 5.65% for one of the Silesian coals and an estimated 6% for Boehlen tar. 1 table.

  10. Identification and characterization of hydrophobic gate residues in TRP channels.

    Science.gov (United States)

    Zheng, Wang; Hu, Ruikun; Cai, Ruiqi; Hofmann, Laura; Hu, Qiaolin; Fatehi, Mohammad; Long, Wentong; Kong, Tim; Tang, Jingfeng; Light, Peter; Flockerzi, Veit; Cao, Ying; Chen, Xing-Zhen

    2018-02-01

    Transient receptor potential (TRP) channels, subdivided into 6 subfamilies in mammals, have essential roles in sensory physiology. They respond to remarkably diverse stimuli, comprising thermal, chemical, and mechanical modalities, through opening or closing of channel gates. In this study, we systematically substituted the hydrophobic residues within the distal fragment of pore-lining helix S6 with hydrophilic residues and, based on Xenopus oocyte and mammalian cell electrophysiology and a hydrophobic gate theory, identified hydrophobic gates in TRPV6/V5/V4/C4/M8. We found that channel activity drastically increased when TRPV6 Ala616 or Met617, or TRPV5 Ala576 or Met577, but not any of their adjacent residues, was substituted with hydrophilic residues. Channel activity strongly correlated with the hydrophilicity of the residues at those sites, suggesting that consecutive hydrophobic residues TRPV6 Ala616-Met617 and TRPV5 Ala576-Met577 form a double-residue gate in each channel. By the same strategy, we identified a hydrophobic single-residue gate in TRPV4 Iso715, TRPC4 Iso617, and TRPM8 Val976. In support of the hydrophobic gate theory, hydrophilic substitution at the gate site, which removes the hydrophobic gate seal, substantially increased the activity of TRP channels in low-activity states but had little effect on the function of activated channels. The double-residue gate channels were more sensitive to small changes in the gate's hydrophobicity or size than single-residue gate channels. The unconventional double-residue gating mechanism in TRP channels may have evolved to respond especially to physiologic stimuli that trigger relatively small gate conformational changes.-Zheng, W., Hu, R., Cai, R., Hofmann, L., Hu, Q., Fatehi, M., Long, W., Kong, T., Tang, J., Light, P., Flockerzi, V., Cao, Y., Chen, X.-Z. Identification and characterization of hydrophobic gate residues in TRP channels.

  11. Error estimation for pattern recognition

    CERN Document Server

    Braga Neto, U

    2015-01-01

    This book is the first of its kind to discuss error estimation with a model-based approach. From the basics of classifiers and error estimators to more specialized classifiers, it covers important topics and essential issues pertaining to the scientific validity of pattern classification. Additional features of the book include: the latest results on the accuracy of error estimation; performance analysis of resubstitution, cross-validation, and bootstrap error estimators using analytical and simulation approaches; and highly interactive computer-based exercises and end-of-chapter problems.

  12. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  13. Medication Errors in Outpatient Pediatrics.

    Science.gov (United States)

    Berrier, Kyla

    2016-01-01

    Medication errors may occur during parental administration of prescription and over-the-counter medications in the outpatient pediatric setting. Misinterpretation of medication labels and dosing errors are two types of errors in medication administration. Health literacy may play an important role in parents' ability to safely manage their child's medication regimen. There are several proposed strategies for decreasing these medication administration errors, including using standardized dosing instruments, using strictly metric units for medication dosing, and providing parents and caregivers with picture-based dosing instructions. Pediatric healthcare providers should be aware of these strategies and seek to implement many of them into their practices.

  14. [DIAGNOSTIC ERRORS IN INTERNAL MEDICINE].

    Science.gov (United States)

    Schattner, Ami

    2017-02-01

    Diagnostic errors remain an important target in improving the quality of care and achieving better health outcomes. With a relatively steady rate estimated at 10-15% in many settings, research aiming to elucidate mechanisms of error is highly important. Results indicate that not only cognitive mistakes but a number of factors acting together often culminate in a diagnostic error. Far from being 'unpreventable', several methods and techniques are suggested that may show promise in minimizing diagnostic errors. These measures should be further investigated and incorporated into all phases of medical education.

  15. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  16. Effects of Measurement Error on the Output Gap in Japan

    OpenAIRE

    Koichiro Kamada; Kazuto Masuda

    2000-01-01

    Potential output is the largest amount of products that can be produced by fully utilizing available labor and capital stock; the output gap is defined as the discrepancy between actual and potential output. If data on production factors contain measurement errors, total factor productivity (TFP) cannot be estimated accurately from the Solow residual (i.e., the portion of output that is not attributable to labor and capital inputs). This may give rise to distortions in the estimation of potent...

  17. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  18. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  19. Principais parâmetros biológicos avaliados em erros na fase pré-analítica de laboratórios clínicos: revisão sistemática Main biological parameters evaluated in pre-analytical phase errors at clinical laboratories: a systematic review

    Directory of Open Access Journals (Sweden)

    Vivaldo Gomes da Costa

    2012-06-01

    The laboratory testing process comprises three phases: pre-analytical, analytical and post-analytical. Most errors occur in the pre-analytical phase. Thus, their determination and corresponding assessment maximize QAP efficiency. In this study, by means of a systematic review comprising 14 articles, we describe the main biological variations found in the pre-analytical phase at clinical laboratories. The biological parameters described in the review included glucose, cholesterol, triglycerides, enzymes and hormones. As far as venipuncture is concerned, a common error was the prolonged use of the tourniquet. The main error causes were the following: storage time, tourniquet time, phlebotomy techniques, insufficient information to patients, incorrect blood/anticoagulant ratio, inadequate tubes, contaminated samples, medication and interlaboratory alterations. Our results corroborated other studies, although we did not find other investigations that specifically evaluated changes in the pre-analytical phase due to the use of medication. The most assessed biological parameters coincided with clinical tests. Accordingly, both the implementation of an efficient QAP and the development of professional awareness may prevent laboratory inaccuracies.

  20. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Taynna Vernalha Rocha [Faculdades Pequeno Principe (FPP), Curitiba, PR (Brazil); Cordova Junior, Arno Lotar; Almeida, Cristiane Maria; Piedade, Pedro Argolo; Silva, Cintia Mara da, E-mail: taynnavra@gmail.com [Centro de Radioterapia Sao Sebastiao, Florianopolis, SC (Brazil); Brincas, Gabriela R. Baseggio [Centro de Diagnostico Medico Imagem, Florianopolis, SC (Brazil); Marins, Priscila; Soboll, Danyel Scheidegger [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2016-03-15

    Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of a significant difference between the two techniques when the ART-210 head phantom was used. (author)