WorldWideScience

Sample records for residual systematic error

  1. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel, and the remaining random errors were found to be adequately predicted by the proposed method.

  2. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
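
    The unisim/multisim comparison above lends itself to a toy numerical experiment. The sketch below (Python; the linear sensitivities, the MC noise level and all names are illustrative assumptions, not taken from the paper) estimates the same systematic variance both ways for one data bin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: the observable in a data bin responds linearly to each
# systematic parameter s_i with (unknown) sensitivity a_i.  Each MC run
# also carries statistical noise of size sigma_mc.
a = np.array([0.8, 0.5, 0.3, 0.2])      # true 1-sigma systematic effects
sigma_mc = 0.1                          # statistical error of one MC run

def mc_run(s):
    """One Monte Carlo run with systematic parameters set to s."""
    return a @ s + rng.normal(0.0, sigma_mc)

# --- unisim: one parameter varied by +1 sigma per run -----------------
nominal = mc_run(np.zeros(4))
shifts = np.array([mc_run(np.eye(4)[i]) - nominal for i in range(4)])
var_unisim = np.sum(shifts**2)          # quadrature sum of 1-sigma shifts

# --- multisim: all parameters drawn from their priors in every run ----
n_multi = 1000
results = np.array([mc_run(rng.normal(size=4)) for _ in range(n_multi)])
var_multisim = np.var(results, ddof=1)

print(f"true systematic variance : {np.sum(a**2):.3f}")
print(f"unisim estimate          : {var_unisim:.3f}")
# The multisim spread also contains the MC statistical variance, which
# can be subtracted when it is known:
print(f"multisim estimate        : {var_multisim - sigma_mc**2:.3f}")
```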

  3. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  4. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.

  5. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Mark [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Tuen Mun Hospital, Hong Kong (China); Grehn, Melanie [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Cremers, Florian [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Siebert, Frank-Andre [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Wurster, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Department for Radiation Oncology, University Medicine Greifswald, Greifswald (Germany); Huttenlocher, Stefan [Saphir Radiosurgery Center Northern Germany, Güstrow (Germany); Dunst, Jürgen [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Kiel (Germany); Department for Radiation Oncology, University Clinic Copenhagen, Copenhagen (Denmark); Hildebrandt, Guido [Department for Radiation Oncology, University Medicine Rostock, Rostock (Germany); Schweikard, Achim [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); Rades, Dirk [Department for Radiation Oncology, University Medical Center Schleswig-Holstein, Lübeck (Germany); Ernst, Floris [Institute for Robotics and Cognitive Systems, University of Lübeck, Lübeck (Germany); and others

    2017-03-15

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  6. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    International Nuclear Information System (INIS)

    Chan, Mark; Grehn, Melanie; Cremers, Florian; Siebert, Frank-Andre; Wurster, Stefan; Huttenlocher, Stefan; Dunst, Jürgen; Hildebrandt, Guido; Schweikard, Achim; Rades, Dirk; Ernst, Floris

    2017-01-01

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  7. On the effect of systematic errors in near real time accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1987-01-01

    Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e., for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper sequential test procedures are considered which are based on the so-called MUF residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity if this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length for constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation, where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered so that the efficiency of NRTA evaluation methods can be estimated realistically.
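
    A minimal simulation of the sequential-test behaviour described above (illustrative Python, not from the paper; the CUSUM scheme, thresholds and variances are assumptions) shows how a systematic error that persists over all inventory periods drives the average run length.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_length(loss, sigma_r=1.0, sigma_s=0.5, h=5.0, k=0.5, max_t=2000):
    """Periods until a one-sided CUSUM on the MUF residuals alarms.
    The systematic error is drawn once and then persists, as tacitly
    assumed in the evaluation practice questioned by the paper."""
    systematic = rng.normal(0.0, sigma_s)      # frozen over all periods
    cusum = 0.0
    for t in range(1, max_t + 1):
        muf = loss + systematic + rng.normal(0.0, sigma_r)
        cusum = max(0.0, cusum + muf - k)      # k: reference value
        if cusum > h:                          # h: decision threshold
            return t
    return max_t                               # censored at max_t

def arl(loss, n=500, **kw):
    """Average run length over n simulated campaigns."""
    return np.mean([run_length(loss, **kw) for _ in range(n)])

print("ARL, no diversion        :", arl(0.0))
print("ARL, constant loss 1.0   :", arl(1.0))
# When the persistent systematic error is as large as the random error,
# detection of a constant loss degrades sharply:
print("ARL, loss 1.0, sigma_s=1 :", arl(1.0, sigma_s=1.0))
```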

  8. A new method for weakening the combined effect of residual errors on multibeam bathymetric data

    Science.gov (United States)

    Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue

    2014-12-01

    The multibeam bathymetric system (MBS) has been widely applied in marine surveying to provide high-resolution seabed topography. However, some factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer and so on. Although these factors are corrected rigorously in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken it using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. The method involves four steps: separation of the low-frequency and high-frequency parts of the bathymetric data, reconstruction of the trend of the actual seabed topography, merging of the actual trend and the extracted microtopography, and accuracy evaluation. Experimental results show that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
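
    The first step of the method, separating the low-frequency trend from the high-frequency content in the spectral domain, can be sketched as follows (Python; the synthetic profile, the cut-off wavelength and the artefact model are illustrative assumptions, not the paper's data).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic along-track depth profile: a smooth seabed trend plus fine
# microtopography, contaminated by a medium-wavelength systematic bias
# of the kind left behind by residual sound-velocity/attitude errors.
n, dx = 1024, 5.0                                  # samples, spacing [m]
x = np.arange(n) * dx
trend = -200.0 + 10.0 * np.sin(2 * np.pi * x / 4000.0)
micro = 0.3 * rng.standard_normal(n)
bias = 1.5 * np.sin(2 * np.pi * x / 800.0)         # residual-error artefact
depth = trend + micro + bias

# Split the profile in the frequency domain: wavelengths longer than a
# cut-off form the "trend"; the rest (microtopography plus the
# short-wave artefact, separated in the paper's later steps) remains.
cutoff_wavelength = 1500.0                         # [m], assumed
f = np.fft.rfftfreq(n, d=dx)                       # cycles per metre
spec = np.fft.rfft(depth)
low = spec.copy()
low[f > 1.0 / cutoff_wavelength] = 0.0
trend_est = np.fft.irfft(low, n)

print(f"RMS of systematic artefact : {np.std(bias):.2f} m")
print(f"RMS error of trend estimate: {np.sqrt(np.mean((trend_est - trend)**2)):.2f} m")
```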

  9. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Residual rotational set-up errors after daily cone-beam CT image guided radiotherapy of locally advanced cervical cancer

    International Nuclear Information System (INIS)

    Laursen, Louise Vagner; Elstrøm, Ulrik Vindelev; Vestergaard, Anne; Muren, Ludvig P.; Petersen, Jørgen Baltzer; Lindegaard, Jacob Christian; Grau, Cai; Tanderup, Kari

    2012-01-01

    Purpose: Due to the often quite extended treatment fields in cervical cancer radiotherapy, uncorrected rotational set-up errors result in a potential risk of target miss. This study reports on the residual rotational set-up error after using daily cone beam computed tomography (CBCT) to position cervical cancer patients for radiotherapy treatment. Methods and materials: Twenty-five patients with locally advanced cervical cancer had daily CBCT scans (650 CBCTs in total) prior to treatment delivery. We retrospectively analyzed the translational shifts made in the clinic prior to each treatment fraction as well as the residual rotational errors remaining after translational correction. Results: The CBCT-guided couch movement resulted in a mean translational 3D vector correction of 7.4 mm. Residual rotational error resulted in a target shift exceeding 5 mm in 57 of the 650 treatment fractions. Three patients alone accounted for 30 of these fractions. Nine patients had no shifts exceeding 5 mm and 13 patients had 5 or less treatment fractions with such shifts. Conclusion: Twenty-two of the 25 patients have none or few treatment fractions with target shifts larger than 5 mm due to residual rotational error. However, three patients display a significant number of shifts suggesting a more systematic set-up error.
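
    The link between a residual rotation and the resulting target shift is simple geometry: a point at distance r from the rotation axis moves along a chord of length 2·r·sin(θ/2). A worked example follows (Python; the off-axis distances are illustrative assumptions, not the paper's field geometry).

```python
import numpy as np

def rotational_shift(r_mm, theta_deg):
    """Displacement of a point r_mm from the rotation axis after a
    residual rotation of theta_deg: the chord length 2*r*sin(theta/2)."""
    return 2.0 * r_mm * np.sin(np.radians(theta_deg) / 2.0)

# Illustrative off-axis distances for an extended cervix field:
for r_mm in (50.0, 100.0, 150.0):
    shift = rotational_shift(r_mm, 2.0)
    print(f"r = {r_mm:5.0f} mm, 2 deg residual rotation -> {shift:.1f} mm")
# At 150 mm a 2 degree rotation already exceeds the 5 mm criterion used
# in the study, which is why extended fields are sensitive to rotations.
```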

  11. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, the analysis shows that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation would not yield any significant benefits in this case.

  12. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward.
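
    The quadrature rule for common errors can be checked numerically. The sketch below (Python, illustrative numbers) builds the covariance matrix of two measurements sharing a common calibration error and shows that the uncertainty of their generalized-least-squares average is the statistical part and the common error added in quadrature.

```python
import numpy as np

# Two measurements of the same quantity with uncorrelated errors e_i
# and a shared (common) calibration error c: x_i = mu + e_i + c.
sigma = np.array([1.0, 1.5])    # uncorrelated ("statistical") errors
s = 0.8                         # common ("systematic") error, fully correlated

# Covariance matrix: diagonal statistical part plus a rank-one block
# from the common error -- correlations come from shared errors.
C = np.diag(sigma**2) + s**2 * np.ones((2, 2))

# Generalized least-squares average and its variance:
x = np.array([10.2, 11.0])
w = np.linalg.solve(C, np.ones(2))     # weights C^{-1} 1
mu_hat = (w @ x) / w.sum()
var_mu = 1.0 / w.sum()

print(f"GLS average : {mu_hat:.3f}")
print(f"uncertainty : {np.sqrt(var_mu):.3f}")
# The common error adds in quadrature to the statistical uncertainty of
# the average and is NOT reduced by averaging, unlike the statistical part:
stat_var = 1.0 / np.sum(1.0 / sigma**2)
print(f"check: sqrt(stat_var + s^2) = {np.sqrt(stat_var + s**2):.3f}")
```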

  13. Corrective Techniques and Future Directions for Treatment of Residual Refractive Error Following Cataract Surgery

    Science.gov (United States)

    Moshirfar, Majid; McCaughey, Michael V; Santiago-Caban, Luis

    2015-01-01

    Postoperative residual refractive error following cataract surgery is not an uncommon occurrence for a large proportion of modern-day patients. Residual refractive errors can be broadly classified into 3 main categories: myopic, hyperopic, and astigmatic. The degree to which a residual refractive error adversely affects a patient is dependent on the magnitude of the error, as well as the specific type of intraocular lens the patient possesses. There are a variety of strategies for resolving residual refractive errors that must be individualized for each specific patient scenario. In this review, the authors discuss contemporary methods for rectification of residual refractive error, along with their respective indications/contraindications, and efficacies. PMID:25663845

  14. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S. [et al.

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
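
    The core of a systematic chromatic error can be reproduced with a few lines of synthetic photometry. In the sketch below (Python; the bandpass, toy spectra and 10% throughput tilt are illustrative assumptions, not DES data), a red and a blue star pick up different zero-point shifts under the same throughput change, which no grey (color-independent) calibration can remove.

```python
import numpy as np

wl = np.linspace(560.0, 720.0, 400)          # wavelength grid [nm]
dwl = wl[1] - wl[0]

def toy_spectrum(T):
    """Crude blackbody-like spectrum; T sets how blue or red it is."""
    return 1.0 / (wl**5 * (np.exp(1.44e7 / (wl * T)) - 1.0))

def mag(flux, throughput):
    """Synthetic broadband magnitude (arbitrary zero point)."""
    return -2.5 * np.log10(np.sum(flux * throughput * wl) * dwl)

natural = np.ones_like(wl)                   # "natural system" throughput
tilted = 1.0 + 0.10 * (wl - wl.mean()) / (wl.max() - wl.min())

for T, label in ((4000.0, "red star "), (8000.0, "blue star")):
    f = toy_spectrum(T)
    dm = mag(f, tilted) - mag(f, natural)
    print(f"{label}: zero-point shift = {dm * 1000:+.1f} mmag")
# The two stars pick up different shifts: a grey zeropoint correction
# cannot remove this colour-dependent (chromatic) part.
```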

  15. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance and dampening of zitterbewegung.
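
    The variance inflation from placement errors is easy to see in a simulation. The sketch below (Python; integrand, spacing and error levels are illustrative assumptions) compares one-dimensional systematic sampling with and without Gaussian errors in the sample positions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Integrand on [0, 5] and its exact integral, for reference.
f = lambda x: np.exp(-x) * ((x >= 0.0) & (x < 5.0))
true_integral = 1.0 - np.exp(-5.0)

def systematic_estimate(spacing, jitter_sd):
    """Systematic-sampling estimate of the integral with iid Gaussian
    errors in the placement of each sample point."""
    start = rng.uniform(0.0, spacing)              # random grid origin
    pts = np.arange(start, 5.0, spacing)
    pts = pts + rng.normal(0.0, jitter_sd, pts.size)   # placement errors
    return spacing * np.sum(f(pts))

def variance(spacing, jitter_sd, reps=20_000):
    est = [systematic_estimate(spacing, jitter_sd) for _ in range(reps)]
    return np.var(est)

for sd in (0.0, 0.01, 0.05):
    print(f"placement error sd = {sd:.2f}: Var = {variance(0.25, sd):.2e}")
# Even small placement errors inflate the variance well above the
# exactly periodic (sd = 0) case.
```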

  16. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

    Systematic error growth rate peak is observed at wavenumber 2 up to 4-day forecast then .... the influence of summer systematic error and random ... total exchange. When the error energy budgets are examined in the spectral domain, one may ask questions on the error growth at a certain wavenumber from its interaction with ...

  17. Systematic errors in VLF direction-finding of whistler ducts

    International Nuclear Information System (INIS)

    Strangeways, H.J.; Rycroft, M.J.

    1980-01-01

    In the previous paper it was shown that the systematic error in the azimuthal bearing due to multipath propagation and incident wave polarisation (when this also constitutes an error) was given by only three different forms for all VLF direction-finders currently used to investigate the position of whistler ducts. In this paper the magnitude of this error is investigated for different ionospheric and ground parameters for these three different systematic error types. By incorporating an ionosphere for which the refractive index is given by the full Appleton-Hartree formula, the variation of the systematic error with ionospheric electron density and latitude and direction of propagation is investigated in addition to the variation with wave frequency, ground conductivity and dielectric constant and distance of propagation. The systematic bearing error is also investigated for the three methods when the azimuthal bearing is averaged over a 2 kHz bandwidth. This is found to lead to a significantly smaller bearing error which, for the crossed-loops goniometer, approximates the bearing error calculated when phase-dependent terms in the receiver response are ignored. (author)

  18. Investigation of systematic errors of metastable "atomic pair" number

    CERN Document Server

    Yazkov, V

    2015-01-01

    Sources of systematic errors in the analysis of data collected in 2012 are analysed. Estimations of systematic errors in the number of “atomic pairs” from metastable π⁺π⁻ atoms are presented.

  19. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    Directory of Open Access Journals (Sweden)

    Guo Xiao-Mao

    2010-10-01

    Background: The cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and the correction action levels needed in cone beam computed tomography (CBCT) guided accelerated partial breast irradiation (APBI). Methods: Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained, one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. Results: A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatment delivery. Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; and 7.6 mm, 7.9 mm, 9.4 mm, 10.1 mm and 12.7 mm in the AP direction.

  20. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    International Nuclear Information System (INIS)

    Cai, Gang; Hu, Wei-Gang; Chen, Jia-Yi; Yu, Xiao-Li; Pan, Zi-Qiang; Yang, Zhao-Zhi; Guo, Xiao-Mao; Shao, Zhi-Min; Jiang, Guo-Liang

    2010-01-01

    The cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and the correction action levels needed in cone beam computed tomography (CBCT) guided accelerated partial breast irradiation (APBI). Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained, one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatment delivery. Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; and 7.6 mm, 7.9 mm, 9.4 mm, 10.1 mm and 12.7 mm in the AP direction.
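
    For context, a widely used population margin recipe (van Herk's M = 2.5Σ + 0.7σ; an outside reference, not the formula used in this paper, which derives margins from simulated correction levels) can be applied to the skin-mark setup errors quoted above.

```python
import numpy as np

def van_herk_margin(big_sigma, small_sigma):
    """Population margin recipe M = 2.5*Sigma + 0.7*sigma (van Herk),
    shown only for context -- not the margin formula of this study."""
    return 2.5 * np.asarray(big_sigma) + 0.7 * np.asarray(small_sigma)

# Skin-mark systematic / random setup errors from the abstract (mm):
Sigma = [2.1, 3.1, 2.3]          # LR, SI, AP systematic
sigma = [1.8, 2.3, 2.0]          # LR, SI, AP random
for axis, m in zip(("LR", "SI", "AP"), van_herk_margin(Sigma, sigma)):
    print(f"{axis}: recipe margin = {m:.1f} mm")
```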

  1. Differential Effects of Visual-Acoustic Biofeedback Intervention for Residual Speech Errors

    Science.gov (United States)

    McAllister Byun, Tara; Campbell, Heather

    2016-01-01

    Recent evidence suggests that the incorporation of visual biofeedback technologies may enhance response to treatment in individuals with residual speech errors. However, there is a need for controlled research systematically comparing biofeedback versus non-biofeedback intervention approaches. This study implemented a single-subject experimental design with a crossover component to investigate the relative efficacy of visual-acoustic biofeedback and traditional articulatory treatment for residual rhotic errors. Eleven child/adolescent participants received 10 sessions of visual-acoustic biofeedback and 10 sessions of traditional treatment, with the order of biofeedback and traditional phases counterbalanced across participants. Probe measures eliciting untreated rhotic words were administered in at least three sessions prior to the start of treatment (baseline), between the two treatment phases (midpoint), and after treatment ended (maintenance), as well as before and after each treatment session. Perceptual accuracy of rhotic production was assessed by outside listeners in a blinded, randomized fashion. Results were analyzed using a combination of visual inspection of treatment trajectories, individual effect sizes, and logistic mixed-effects regression. Effect sizes and visual inspection revealed that participants could be divided into categories of strong responders (n = 4), mixed/moderate responders (n = 3), and non-responders (n = 4). Individual results did not reveal a reliable pattern of stronger performance in biofeedback versus traditional blocks, or vice versa. Moreover, biofeedback versus traditional treatment was not a significant predictor of accuracy in the logistic mixed-effects model examining all within-treatment word probes. However, the interaction between treatment condition and treatment order was significant: biofeedback was more effective than traditional treatment in the first phase of treatment, and traditional treatment was more effective in the second phase.

  2. ASSESSMENT OF SYSTEMATIC CHROMATIC ERRORS THAT IMPACT SUB-1% PHOTOMETRIC PRECISION IN LARGE-AREA SKY SURVEYS

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Boada, S.; Mondrik, N.; Nagasawa, D. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, and Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Tucker, D.; Annis, J.; Finley, D. A.; Kent, S.; Lin, H.; Marriner, J.; Wester, W. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Kessler, R.; Scolnic, D. [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Burke, D. L.; Rykoff, E. S. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); James, D. J.; Walker, A. R. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Collaboration: DES Collaboration; and others

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  3. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

    We study the treatment of systematic errors in the determination of v2 for charged jets in √sNN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ² according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ² and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
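
    The re-cast of separate error sources into an equivalent covariance matrix works as in the toy example below (Python; the data points and error sizes are illustrative, not the ALICE values): each fully correlated source contributes a rank-one block, and the χ² follows from the full matrix.

```python
import numpy as np

# Toy version of the covariance re-cast: data points with statistical
# errors plus a fully correlated systematic and a "shape" systematic.
v2 = np.array([0.02, 0.05, 0.04])        # measured points (illustrative)
stat = np.array([0.010, 0.015, 0.012])   # statistical errors
corr = np.array([0.008, 0.020, 0.016])   # correlated systematic (same sign)
shape = np.array([0.005, -0.004, 0.006]) # shape systematic (one shift mode)

# Equivalent covariance matrix: diagonal statistics plus one rank-one
# block per fully correlated error source.
C = np.diag(stat**2) + np.outer(corr, corr) + np.outer(shape, shape)

# Chi-square of the null hypothesis v2 = 0:
r = v2 - 0.0
chi2 = r @ np.linalg.solve(C, r)
print(f"chi2 = {chi2:.2f} for {len(v2)} points")
```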

  4. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies mainly focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. A first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying the time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors commonly exist in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
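
    The time-domain check mentioned above can be illustrated with a short Allan-variance computation (Python; the non-overlapping estimator and the synthetic residual series are illustrative assumptions): for white noise the Allan variance falls with the averaging factor, while a correlated unmodeled component makes it level off.

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of series x at averaging factor m."""
    n = x.size // m * m
    means = x[:n].reshape(-1, m).mean(axis=1)    # block averages
    return 0.5 * np.mean(np.diff(means)**2)

rng = np.random.default_rng(4)
n = 4096
white = rng.standard_normal(n)                   # pure white noise
walk = np.cumsum(0.05 * rng.standard_normal(n))  # correlated "unmodeled" part

for m in (1, 4, 16, 64):
    av_w = allan_variance(white, m)
    av_c = allan_variance(white + walk, m)
    print(f"m = {m:3d}: white {av_w:.3f}   white+correlated {av_c:.3f}")
# White noise: Allan variance falls like 1/m.  A correlated unmodeled
# error makes it level off or rise, flagging remaining signal in the
# residuals, as in the paper's time-domain check.
```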

  5. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of supplement 1 of GUM. (paper)
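
    The contrast between the two evaluations can be made concrete (Python sketch; the spec limit and the simulated offset readings are illustrative assumptions): the type B route converts a manufacturer's limit into a standard uncertainty via the uniform distribution, while the type A route exposes a non-centred distribution that calls for a correction.

```python
import numpy as np

rng = np.random.default_rng(5)

# Type B: manufacturer states |offset| <= a; assuming a uniform,
# zero-centred distribution (maximum-entropy choice) gives u = a/sqrt(3).
a = 0.5                                       # mV, illustrative spec limit
u_typeB = a / np.sqrt(3)

# Type A: repeated offset measurements under varying conditions.  The
# paper's point: the observed distribution need not be centred on zero,
# so a correction (minus the mean) should be applied.
readings = rng.normal(0.31, 0.05, size=50)    # simulated, biased offsets
correction = -np.mean(readings)
u_typeA = np.std(readings, ddof=1) / np.sqrt(readings.size)

print(f"type B (spec, uniform): u = {u_typeB:.3f} mV, no correction")
print(f"type A (experiment)   : correction = {correction:+.3f} mV, "
      f"u = {u_typeA:.3f} mV")
```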

  6. Rigid Residue Scan Simulations Systematically Reveal Residue Entropic Roles in Protein Allostery.

    Directory of Open Access Journals (Sweden)

    Robert Kalescky

    2016-04-01

    Intra-protein information is transmitted over distances via allosteric processes. This ubiquitous protein process allows for protein function changes due to ligand binding events. Understanding protein allostery is essential to understanding protein functions. In this study, allostery in the second PDZ domain (PDZ2) of the human PTP1E protein is examined as a model system to advance a recently developed rigid residue scan method, combined with configurational entropy calculation and principal component analysis. The contributions from individual residues to whole-protein dynamics and allostery were systematically assessed via rigid body simulations of both unbound and ligand-bound states of the protein. The entropic contributions of individual residues to whole-protein dynamics were evaluated based on covariance-based correlation analysis of all simulations. The changes in overall protein entropy when individual residues are held rigid support the view that the rigidity/flexibility equilibrium in protein structure is governed by Le Châtelier's principle of chemical equilibrium. Key residues of PDZ2 allostery were identified, in good agreement with NMR studies of the same protein bound to the same peptide. On the other hand, the change of entropic contribution from each residue upon perturbation revealed intrinsic differences among all the residues. The quasi-harmonic and principal component analyses of simulations without rigid residue perturbation showed a coherent allosteric mode for the unbound and bound states, respectively. The projection of simulations with rigid residue perturbation onto the coherent allosteric modes demonstrated intrinsic shifting of ensemble distributions, supporting the population-shift theory of protein allostery. Overall, the study presented here provides a robust and systematic approach to estimate the contribution of individual residue internal motion to overall protein dynamics and allostery.

  7. Practical guidance on representing the heteroscedasticity of residual errors of hydrological predictions

    Science.gov (United States)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George

    2016-04-01

    Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches using a range of multiple catchments / hydrological models / performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
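
    A minimal illustration of one transformational approach (Python; a fixed Box-Cox parameter and a synthetic multiplicative error model, both assumptions for the sketch): residuals that fan out with flow in raw space become roughly homoscedastic after the transform.

```python
import numpy as np

rng = np.random.default_rng(6)

def boxcox(q, lam=0.2):
    """Box-Cox transform with a fixed transformation parameter lam."""
    return (q**lam - 1.0) / lam if lam != 0 else np.log(q)

# Synthetic flows with heteroscedastic errors: spread grows with flow.
q_sim = np.exp(rng.normal(2.0, 1.0, 2000))             # "predicted" flows
q_obs = q_sim * np.exp(rng.normal(0.0, 0.3, 2000))     # multiplicative error

raw_resid = q_obs - q_sim                  # heteroscedastic in raw space
tr_resid = boxcox(q_obs) - boxcox(q_sim)   # approximately homoscedastic

# Compare residual spread in low- and high-flow halves:
lo, hi = q_sim < np.median(q_sim), q_sim >= np.median(q_sim)
for name, r in (("raw", raw_resid), ("Box-Cox", tr_resid)):
    print(f"{name:8s} sd(low flows) = {np.std(r[lo]):7.3f}, "
          f"sd(high flows) = {np.std(r[hi]):7.3f}")
```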

  8. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    Arima, Tatsumi

    1993-01-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but it does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, based on a comparison of the Tobimatsu-Shimizu program with BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.)

  9. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but it does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, based on a comparison of the Tobimatsu-Shimizu program with BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.)

  10. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    The International Target Values (ITV) give random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be evaluated periodically and checked against the ITV for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed with focus on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)
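
    One standard way to obtain the two variance components, consistent with the paper's goals (1)-(4) though not necessarily its exact model, is a one-way ANOVA decomposition over calibration periods, sketched below in Python with simulated data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated accountancy measurement errors: k calibration periods with
# n samples each.  Within a period the systematic error is frozen; it
# is redrawn at recalibration.  (Illustrative model, not the paper's.)
k, n = 12, 8
sigma_r, sigma_s = 0.4, 0.25                  # true random / systematic
sys_err = rng.normal(0.0, sigma_s, size=(k, 1))
d = sys_err + rng.normal(0.0, sigma_r, size=(k, n))

# One-way ANOVA decomposition: within-period scatter -> random error,
# excess between-period scatter -> systematic error.
within = d.var(axis=1, ddof=1).mean()
between = d.mean(axis=1).var(ddof=1)
var_random = within
var_systematic = max(between - within / n, 0.0)  # clipped to stay >= 0,
                                                 # cf. the paper's goal (2)

print(f"random    : est {np.sqrt(var_random):.3f} vs true {sigma_r}")
print(f"systematic: est {np.sqrt(var_systematic):.3f} vs true {sigma_s}")
```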

  11. Systematic Errors in Dimensional X-ray Computed Tomography

    DEFF Research Database (Denmark)

    Systematic errors are of particular interest because it is possible to compensate them. In dimensional X-ray computed tomography (CT), many physical quantities influence the final result, and it is therefore important to know which factors in CT measurements potentially lead to systematic errors. In this talk, typical error sources in dimensional X-ray CT are discussed...

  12. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  13. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design

  14. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  15. Assessment of residual error for online cone-beam CT-guided treatment of prostate cancer patients

    International Nuclear Information System (INIS)

    Letourneau, Daniel; Martinez, Alvaro A.; Lockman, David; Yan Di; Vargas, Carlos; Ivaldi, Giovanni; Wong, John

    2005-01-01

    Purpose: Kilovoltage cone-beam CT (CBCT) implemented on board a medical accelerator is available for image-guidance applications in our clinic. The objective of this work was to assess the magnitude and stability of the residual setup error associated with CBCT online-guided prostate cancer patient setup. Residual error pertains to the uncertainty in image registration, the limited mechanical accuracy, and the intrafraction motion during imaging and treatment. Methods and Materials: The residual error for CBCT online-guided correction was first determined in a phantom study. After online correction, the phantom residual error was determined by comparing megavoltage portal images acquired every 90 deg. to the corresponding digitally reconstructed radiographs. In the clinical study, 8 prostate cancer patients were implanted with three radiopaque markers made of high-winding coils. After positioning the patient using the skin marks, a CBCT scan was acquired and the setup error determined by fusing the coils on the CBCT and planning CT scans. The patient setup was then corrected by moving the couch accordingly. A second CBCT scan was acquired immediately after the correction to evaluate the residual target setup error. Intrafraction motion was evaluated by tracking the coils and the bony landmarks on kilovoltage radiographs acquired every 30 s between the two CBCT scans. Corrections based on soft-tissue registration were evaluated offline by aligning the prostate contours defined on both planning CT and CBCT images. Results: For ideal rigid phantoms, CBCT image-guided treatment can usually achieve setup accuracy of 1 mm or better. For the patients, after CBCT correction, the target setup error was reduced in almost all cases and was generally within ±1.5 mm. The image guidance process took 23-35 min, dictated by the computer speed and network configuration. The contribution of the intrafraction motion to the residual setup error was small, with a standard deviation of

  16. SHERPA: A systematic human error reduction and prediction approach

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1986-01-01

    This paper describes a Systematic Human Error Reduction and Prediction Approach (SHERPA) which is intended to provide guidelines for human error reduction and quantification in a wide range of human-machine systems. The approach utilizes as its basis current cognitive models of human performance. The first module in SHERPA performs task and human error analyses, which identify likely error modes, together with guidelines for the reduction of these errors by training, procedures and equipment redesign. The second module uses a SARAH approach to quantify the probability of occurrence of the errors identified earlier, and provides cost-benefit analyses to assist in choosing the appropriate error reduction approaches in the third module.

  17. The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews.

    Science.gov (United States)

    Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing

    2017-09-05

    Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), and (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 to be reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for

  18. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Science.gov (United States)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  20. Seeing your error alters my pointing: observing systematic pointing errors induces sensori-motor after-effects.

    Directory of Open Access Journals (Sweden)

    Roberta Ronchi

    During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.

  1. Saccades to remembered target locations: an analysis of systematic and variable errors.

    Science.gov (United States)

    White, J M; Sparks, D L; Stanford, T R

    1994-01-01

    We studied the effects of varying delay interval on the accuracy and velocity of saccades to the remembered locations of visual targets. Remembered saccades were less accurate than control saccades. Both systematic and variable errors contributed to the loss of accuracy. Systematic errors were similar in size for delay intervals ranging from 400 msec to 5.6 sec, but variable errors increased monotonically as delay intervals were lengthened. Compared to control saccades, remembered saccades were slower and the peak velocities were more variable. However, neither peak velocity nor variability in peak velocity was related to the duration of the delay interval. Our findings indicate that a memory-related process is not the major source of the systematic errors observed on memory trials.

  2. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

    This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations and experiments on a mobile robot.
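
    As a concrete illustration of where these two systematic parameters enter, the following minimal sketch (hypothetical function and parameter names, not the authors' implementation) shows a differential-drive odometry update in which the wheel base and the two encoder gains appear explicitly; auto-calibration then amounts to estimating these three constants by minimizing the discrepancy between integrated odometry and poses from the absolute measurement system.

        import numpy as np

        def odometry_step(pose, d_left, d_right, wheel_base,
                          gain_left=1.0, gain_right=1.0):
            # gain_left/gain_right convert raw encoder displacements into true
            # wheel displacements; wheel_base is the effective axle length.
            # Errors in any of these three constants cause systematic drift.
            x, y, theta = pose
            dl = gain_left * d_left
            dr = gain_right * d_right
            ds = 0.5 * (dl + dr)              # translation of the robot centre
            dtheta = (dr - dl) / wheel_base   # rotation of the robot
            return np.array([x + ds * np.cos(theta + 0.5 * dtheta),
                             y + ds * np.sin(theta + 0.5 * dtheta),
                             theta + dtheta])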

  3. MANAGEMENT OF RESIDUAL REFRACTIVE ERROR AFTER CATARACT PHACOEMULSIFICATION. PART 2. INTRAOCULAR APPROACHES

    Directory of Open Access Journals (Sweden)

    K. B. Pershin

    2017-01-01

    The review presents an analysis of the literature data on the methods of surgical correction of residual refractive error after cataract phacoemulsification. Keratorefractive and intraocular approaches are considered in detail. The efficacy and safety of the different groups of methods are compared using the example of comparative studies. Historically earlier keratorefractive methods (laser vision correction with LASIK and PRK techniques on intact eyes, LASIK after implantation of multifocal IOLs, and arcuate keratotomy after phaco) are indicated for the correction of astigmatic refractive error and a small spherical refractive error. Intraocular methods, including IOL exchange and "piggyback" IOL implantation, are used to correct a large spherical refractive error. The introduction of a new technology, the implantation of light-adjustable IOLs, will expand the existing evidence and provide greater predictability and efficiency of the correction of residual refractive error.

  4. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER

    International Nuclear Information System (INIS)

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-01-01

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  5. RHIC susceptibility to variations in systematic magnetic harmonic errors

    International Nuclear Information System (INIS)

    Dell, G.F.; Peggs, S.; Pilat, F.; Satogata, T.; Tepikian, S.; Trbojevic, D.; Wei, J.

    1994-01-01

    Results of a study to determine the sensitivity of tune to uncertainties of the systematic magnetic harmonic errors in the 8 cm dipoles of RHIC are reported. Tolerances specified to the manufacturer for tooling and fabrication can result in systematic harmonics different from the expected values. Limits on the range of systematic harmonics have been established from magnet calculations, and the impact on tune from such harmonics has been established.

  6. Systematic Review of Errors in Inhaler Use

    DEFF Research Database (Denmark)

    Sanchis, Joaquin; Gich, Ignasi; Pedersen, Søren

    2016-01-01

    A systematic search for articles reporting direct observation of inhaler technique by trained personnel covered the period from 1975 to 2014. Outcomes were the nature and frequencies of the three most common errors; the percentage of patients demonstrating correct, acceptable, or poor technique; and variations in these outcomes over these 40 years and when partitioned into years 1 to 20 and years 21 to 40. Analyses were conducted in accordance with recommendations from Preferred Reporting Items for Systematic Reviews and Meta-Analyses and Strengthening the Reporting of Observational Studies in Epidemiology.

  7. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modelling heteroscedastic residual errors

    Science.gov (United States)

    David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera

    2017-04-01

    This study provides guidance that enables hydrological researchers to make probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g., high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of the residual errors of hydrological models. It is well known that hydrological model residual errors are heteroscedastic, i.e., there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
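
    A minimal sketch of how one of the Pareto optimal schemes above works in practice, assuming a Box-Cox residual error model with homoscedastic Gaussian errors in transformed space (the variable names and the zero-flow clipping are illustrative simplifications of the published schemes):

        import numpy as np

        def box_cox(q, lam):
            # Box-Cox transform of streamflow; lam = 0 reduces to the log scheme
            q = np.asarray(q, dtype=float)
            return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

        def transformed_residuals(q_obs, q_sim, lam):
            # residuals are treated as homoscedastic Gaussian in this space
            return box_cox(q_obs, lam) - box_cox(q_sim, lam)

        def predictive_samples(q_sim, lam, sigma, n=1000, seed=0):
            # probabilistic prediction: add noise in transformed space,
            # back-transform, and clip impossible negative flows to zero
            rng = np.random.default_rng(seed)
            z = box_cox(q_sim, lam) + rng.normal(0.0, sigma, size=(n, len(q_sim)))
            return np.exp(z) if lam == 0 else np.maximum(lam * z + 1.0, 0.0) ** (1.0 / lam)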

  8. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  9. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong; Sun, Shuyu; Xie, Xiaoping

    2015-01-01

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  11. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine the patient setup uncertainties in deep-inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, which has less impact due to the stability of organ movement under DIBH. The systematic error is likewise about half of the random error, because a modern linac can reduce systematic uncertainty effectively, while the random errors are uncontrollable. (paper)
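
    The systematic and random components quoted above are conventionally separated as follows: the systematic error is the standard deviation of the per-patient mean displacements, and the random error is the root-mean-square of the per-patient standard deviations. A minimal sketch with invented numbers:

        import numpy as np

        def population_errors(per_patient_errors):
            # per_patient_errors: one 1-D array per patient of measured
            # displacements (mm) in a single direction over the fractions
            means = np.array([e.mean() for e in per_patient_errors])
            sds = np.array([e.std(ddof=1) for e in per_patient_errors])
            overall_mean = means.mean()                 # group systematic shift
            sigma_systematic = means.std(ddof=1)        # systematic error
            sigma_random = np.sqrt(np.mean(sds ** 2))   # random error (RMS of SDs)
            return overall_mean, sigma_systematic, sigma_random

        # example with simulated displacements for 6 patients, 30 fractions each
        rng = np.random.default_rng(0)
        patients = [rng.normal(rng.normal(0.0, 0.5), 1.0, size=30) for _ in range(6)]
        print(population_errors(patients))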

  12. Reducing systematic errors in measurements made by a SQUID magnetometer

    International Nuclear Information System (INIS)

    Kiss, L.F.; Kaptás, D.; Balogh, J.

    2014-01-01

    A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors – radial displacement in particular – and not by instrumental or environmental noise. - Highlights: • A simple method is described which reduces systematic errors of a SQUID. • The errors arise from a radial displacement of the sample in the gradiometer coil. • The procedure is to rotate the sample rod (with the sample) around its axis. • The best fit to the SQUID voltage has to be attained moving the sample through the coil. • The accuracy of measuring magnetic moment can be increased significantly.

  13. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    Science.gov (United States)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

    Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.

  14. 'When measurements mean action' decision models for portal image review to eliminate systematic set-up errors

    International Nuclear Information System (INIS)

    Wratten, C.R.; Denham, J.W.; O'Brien, P.; Hamilton, C.S.; Kron, T.; London Regional Cancer Centre, London, Ontario

    2004-01-01

    The aim of the present paper is to evaluate how the use of decision models in the review of portal images can eliminate systematic set-up errors during conformal therapy. Sixteen patients undergoing four-field irradiation of prostate cancer have had daily portal images obtained during the first two treatment weeks and weekly thereafter. The magnitude of random and systematic variations has been calculated by comparison of the portal images with the reference simulator images using the two-dimensional decision model embodied in the Hotelling's evaluation process (HEP). Random day-to-day set-up variation was small in this group of patients. Systematic errors were, however, common. In 15 of 16 patients, one or more errors of >2 mm were diagnosed at some stage during treatment. Sixteen of the 23 errors were between 2 and 4 mm. Although there were examples of oversensitivity of the HEP in three cases, and one instance of undersensitivity, the HEP proved highly sensitive to the small (2-4 mm) systematic errors that must be eliminated during high-precision radiotherapy. The HEP has proven valuable in diagnosing very small (2-4 mm) systematic errors; combined with one-dimensional decision models, HEP can eliminate the majority of systematic errors during the first 2 treatment weeks. Copyright (2004) Blackwell Science Pty Ltd

  15. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    Science.gov (United States)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical

  16. Diffraction grating strain gauge method: error analysis and its application for the residual stress measurement in thermal barrier coatings

    Science.gov (United States)

    Yin, Yuanjie; Fan, Bozhao; He, Wei; Dai, Xianglu; Guo, Baoqiao; Xie, Huimin

    2018-03-01

    Diffraction grating strain gauge (DGSG) is an optical strain measurement method. Based on this method, a six-spot diffraction grating strain gauge (S-DGSG) system has been developed with the advantages of high and adjustable sensitivity, compact structure, and non-contact measurement. In this study, this system is applied to the residual stress measurement in thermal barrier coatings (TBCs) in combination with the hole-drilling method. During the experiment, the specimen's location must be reset accurately before and after the hole-drilling; however, it was found that rigid-body displacements introduced by the resetting process could seriously influence the measurement accuracy. In order to understand and eliminate the effects of the rigid-body displacements, such as the three-dimensional (3D) rotations and the out-of-plane displacement of the grating, the measurement error of this system is systematically analyzed, and an optimized method is proposed. Moreover, a numerical experiment and a verification tensile test are conducted, and the results confirm the applicability of this optimized method. Finally, using this optimized method, a residual stress measurement experiment is conducted, and the results show that this method can be applied to measure the residual stress in TBCs.

  17. Residual translational and rotational errors after kV X-ray image-guided correction of prostate location using implanted fiducials

    International Nuclear Information System (INIS)

    Wust, Peter; Graf, Reinhold; Boehmer, Dirk; Budach, Volker

    2010-01-01

    Purpose: To evaluate the residual errors and required safety margins after stereoscopic kilovoltage (kV) X-ray target localization of the prostate in image-guided radiotherapy (IGRT) using internal fiducials. Patients and Methods: Radiopaque fiducial markers (FMs) have been inserted into the prostate in a cohort of 33 patients. The ExacTrac/Novalis Body™ X-ray 6D image acquisition system (BrainLAB AG, Feldkirchen, Germany) was used. Corrections were performed in left-right (LR), anterior-posterior (AP), and superior-inferior (SI) direction. Rotational errors around LR (x-axis), AP (y) and SI (z) have been recorded for the first series of nine patients, and since 2007 for the subsequent 24 patients in addition corrected in each fraction by using the Robotic Tilt Module™ and Varian Exact Couch™. After positioning, a second set of X-ray images was acquired for verification purposes. Residual errors were registered and again corrected. Results: Standard deviations (SD) of residual translational random errors in LR, AP, and SI coordinates were 1.3, 1.7, and 2.2 mm. Residual random rotation errors were found for lateral (around x, tilt), vertical (around y, table), and longitudinal (around z, roll) of 3.2°, 1.8°, and 1.5°. Planning target volume (PTV)-clinical target volume (CTV) margins were calculated in LR, AP, and SI direction to 2.3, 3.0, and 3.7 mm. After a second repositioning, the margins could be reduced to 1.8, 2.1, and 1.8 mm. Conclusion: On the basis of the residual setup error measurements, the margin required after one to two online X-ray corrections for the patients enrolled in this study would be at minimum 2 mm. The contribution of intrafractional motion to residual random errors has to be evaluated. (orig.)
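
    For context, residual systematic and random errors like those above are often converted into CTV-to-PTV margins with the van Herk population recipe M = 2.5Σ + 0.7σ; whether this particular recipe underlies the margins quoted here is an assumption, so the sketch below is purely illustrative:

        def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
            # common population margin recipe M = 2.5*Sigma + 0.7*sigma
            return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

        print(van_herk_margin(1.3, 1.3))   # residual errors of 1.3 mm each -> ~4.2 mm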

  19. Medication Errors in the Southeast Asian Countries: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Shahrzad Salmasi

    Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three were done on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and addressed.

  20. Tackling systematic errors in quantum logic gates with composite rotations

    International Nuclear Information System (INIS)

    Cummins, Holly K.; Llewellyn, Gavin; Jones, Jonathan A.

    2003-01-01

    We describe the use of composite rotations to combat systematic errors in single-qubit quantum logic gates and discuss three families of composite rotations which can be used to correct off-resonance and pulse-length errors. Although developed and described within the context of nuclear magnetic resonance quantum computing, these sequences should be applicable to any implementation of quantum computation.

  1. Residual setup errors caused by rotation and non-rigid motion in prone-treated cervical cancer patients after online CBCT image-guidance

    International Nuclear Information System (INIS)

    Ahmad, Rozilawati; Hoogeman, Mischa S.; Quint, Sandra; Mens, Jan Willem; Osorio, Eliana M. Vásquez; Heijmen, Ben J.M.

    2012-01-01

    Purpose: To quantify the impact of uncorrected or partially corrected pelvis rotation and spine bending on region-specific residual setup errors in prone-treated cervical cancer patients. Methods and materials: Fifteen patients received an in-room CBCT scan twice a week. CBCT scans were registered to the planning CT-scan using a pelvic clip box and considering both translations and rotations. For daily correction of the detected translational pelvis setup errors by couch shifts, residual setup errors were determined for L5, L4 and seven other points of interest (POIs). The same was done for a procedure with translational corrections and limited rotational correction (±3°) by a 6D positioning device. Results: With translational correction only, residual setup errors were large especially for L5/L4 in AP direction (Σ = 5.1/5.5 mm). For the 7 POIs the residual setup errors ranged from 1.8 to 5.6 mm (AP). Using the 6D positioning device, the errors were substantially smaller (for L5/L4 in AP direction Σ = 2.7/2.2 mm). Using this device, the percentage of fractions with a residual AP displacement for L4 > 5 mm reduced from 47% to 9%. Conclusions: Setup variations caused by pelvis rotations are large and cannot be ignored in prone treatment of cervical cancer patients. Corrections with a 6D positioning device may considerably reduce resulting setup errors, but the residual setup errors should still be accounted for by appropriate CTV-to-PTV margins.

  2. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

    Off-resonance effects can introduce significant systematic errors in R2 measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, 15N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R2 caused by noise. Good estimates of the total R2 uncertainty are critical in order to obtain accurate estimates of the optimized chemical exchange parameters and their uncertainties derived from χ2 minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in 15N R2 values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ2 minimization protocol, in which the Carver-Richards equation is used to fit the observed R2 dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from 1H R2 measurements in which systematic errors are negligible. Although the 1H and 15N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τex, and the fractional population, pa) was constrained to globally fit all R2 profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τex and pa as global parameters was not improved when these parameters were free to fit the R
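
    A minimal sketch of the global-fitting strategy described above, on synthetic data and with the simpler fast-exchange Luz-Meiboom expression as a stand-in for the Carver-Richards equation (in fast exchange the population pa is absorbed into the amplitude Φ, so only the exchange rate kex = 1/τex is shared globally here; all names and numbers are illustrative):

        import numpy as np
        from scipy.optimize import least_squares

        def r2_disp(nu_cpmg, r20, phi, kex):
            # Luz-Meiboom fast-exchange dispersion: a simplified stand-in
            # for the Carver-Richards equation used in the paper
            x = kex / (4.0 * nu_cpmg)
            return r20 + (phi / kex) * (1.0 - np.tanh(x) / x)

        def global_residuals(params, nu, data, err, n_res):
            kex = params[0]                              # shared (global) parameter
            out = []
            for i in range(n_res):                       # per-residue r20 and phi
                r20, phi = params[1 + 2 * i: 3 + 2 * i]
                out.append((r2_disp(nu, r20, phi, kex) - data[i]) / err[i])
            return np.concatenate(out)

        rng = np.random.default_rng(1)
        nu = np.linspace(50.0, 1000.0, 12)               # CPMG frequencies (Hz)
        data = np.array([r2_disp(nu, 10.0, 2.0e4, 1500.0),
                         r2_disp(nu, 12.0, 3.5e4, 1500.0)])
        err = 0.03 * data                                # ~3% total uncertainty
        data = data + rng.normal(0.0, err)

        fit = least_squares(global_residuals, x0=[1000.0, 9.0, 1e4, 9.0, 1e4],
                            args=(nu, data, err, 2))
        print(fit.x)                  # kex, then (r20, phi) for each residue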

  3. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
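
    A minimal numerical illustration (all numbers invented) of the two routes to the likelihood compared in this study: exact evaluation of the multivariate Gaussian, which requires inverting the experimental covariance matrix, versus averaging products of univariate Gaussians over sampled systematic errors; the two estimates agree as the sample size grows:

        import numpy as np
        from scipy.stats import multivariate_normal, norm

        mu = np.array([1.0, 1.2, 0.9, 1.1])        # model prediction
        y = np.array([1.15, 1.30, 1.05, 1.18])     # correlated experimental points
        sig_r = np.full(4, 0.08)                   # independent (random) errors
        sig_s = 0.10                               # fully correlated systematic error

        # conventional route: multivariate Gaussian with the full covariance
        cov = np.diag(sig_r**2) + sig_s**2 * np.ones((4, 4))
        L_exact = multivariate_normal.pdf(y, mean=mu, cov=cov)

        # sampling route: draw the systematic shift, average univariate products
        rng = np.random.default_rng(0)
        b = rng.normal(0.0, sig_s, size=100000)
        L_mc = np.mean(np.prod(norm.pdf(y, mu + b[:, None], sig_r), axis=1))

        print(L_exact, L_mc)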

  4. Effects of averaging over motion and the resulting systematic errors in radiation therapy

    International Nuclear Information System (INIS)

    Evans, Philip M; Coolens, Catherine; Nioutsikou, Elena

    2006-01-01

    The potential for systematic errors in radiotherapy of a breathing patient is considered using the statistical model of Bortfeld et al (2002 Phys. Med. Biol. 47 2203-20). It is shown that although averaging over 30 fractions does result in a narrow Gaussian distribution of errors, as predicted by the central limit theorem, the fact that one or a few samples of the breathing patient's motion distribution are used for treatment planning (in contrast to the many treatment fractions that are likely to be delivered) may result in a much larger error with a systematic component. The error distribution may be particularly large if a scan at breath-hold is used for planning. (note)

  5. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. The distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  6. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition.
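
    A 1-D toy illustration (invented dose profile and numbers) of the two effects studied above: random setup error over many fractions blurs the delivered dose, which can be modelled by convolving the profile with a Gaussian, while a systematic isocenter displacement shifts it, eroding the minimum dose in the target:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d, shift

        dx = 0.5                                    # grid spacing (mm)
        x = np.arange(-40.0, 40.0, dx)
        dose = (np.abs(x) < 15).astype(float)       # idealized flat dose, 30 mm wide
        target = np.abs(x) < 13                     # target volume

        blurred = gaussian_filter1d(dose, sigma=1.0 / dx)   # random error, 1 mm SD
        shifted = shift(blurred, 2.0 / dx, order=1)         # systematic error, 2 mm

        # minimum target dose: barely affected by blurring, halved by the shift
        print(blurred[target].min(), shifted[target].min())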

  8. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

    One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of the radionuclides that suddenly, and without control, reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and the measurement itself that contribute to a high degree to the total mass activity evaluation error. Statistical errors in gamma spectrometry as well as in total mass alpha and beta activity evaluation are also discussed. Besides, some of the possible sources of error in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of these errors to the total mass activity evaluation error is estimated, and procedures that could possibly reduce it are discussed (author)

  9. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and the attitude registration. As standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, e.g. caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and an attitude recording of just 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency of systematic deformation at a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors to object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to non-optimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS
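
    The leveling step discussed above amounts to fitting and removing a tilted plane from the height differences between the DHM and a reference height model. A minimal sketch (invented data; real pipelines work on gridded DSM differences with outlier rejection):

        import numpy as np

        def remove_tilt(x, y, dh):
            # fit dh ~ a*x + b*y + c by least squares and subtract the plane;
            # dh are height differences DHM minus reference at positions (x, y)
            A = np.column_stack([x, y, np.ones_like(x)])
            coef, *_ = np.linalg.lstsq(A, dh, rcond=None)
            return dh - A @ coef, coef

        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 1000, 200), rng.uniform(0, 1000, 200)
        dh = 0.002 * x - 0.001 * y + 0.3 + rng.normal(0.0, 0.5, 200)  # tilt + noise
        detilted, plane = remove_tilt(x, y, dh)
        print(plane, detilted.std())        # recovered tilt, residual scatter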

  10. Random and systematic beam modulator errors in dynamic intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Parsai, Homayon; Cho, Paul S; Phillips, Mark H; Giansiracusa, Robert S; Axen, David

    2003-01-01

    This paper reports on the dosimetric effects of random and systematic modulator errors in the delivery of dynamic intensity-modulated beams. A sliding-window type delivery that utilizes a combination of multileaf collimators (MLCs) and backup diaphragms was examined. Gaussian functions with standard deviations ranging from 0.5 to 1.5 mm were used to simulate random positioning errors. A clinical example involving a clival meningioma was chosen, with the optic chiasm and brain stem as limiting critical structures in the vicinity of the tumour. Dose calculations for different modulator fluctuations were performed, and a quantitative analysis was carried out based on cumulative and differential dose-volume histograms for the gross target volume and surrounding critical structures. The study indicated that random modulator errors have a strong tendency to reduce minimum target dose and homogeneity. Furthermore, it was shown that random perturbation of both MLCs and backup diaphragms of the order of σ = 1 mm can lead to 5% errors in the prescribed dose. In comparison, when the MLCs or backup diaphragms alone were perturbed, the system was more robust, and modulator errors of at least σ = 1.5 mm were required to cause dose discrepancies greater than 5%. For systematic perturbations, even errors of the order of ±0.5 mm were shown to result in significant dosimetric deviations.
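
    A toy sliding-window fluence model (invented geometry, one leaf pair, unit MU per control point) for probing how such leaf-position perturbations propagate to the delivered fluence; random errors partly average out over the control points, whereas a systematic gap error biases the fluence everywhere:

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.linspace(0.0, 100.0, 501)          # positions across the field (mm)
        t = np.arange(100)                        # control points, 1 MU each
        lead = 5.0 + 1.0 * t                      # leading-leaf trajectory (mm)
        trail = lead - 20.0                       # trailing leaf, 20 mm gap

        def fluence(lead, trail):
            # a point accumulates 1 MU per control point while it lies in the gap
            exposed = (x >= trail[:, None]) & (x <= lead[:, None])
            return exposed.sum(axis=0).astype(float)

        nominal = fluence(lead, trail)
        random_err = fluence(lead + rng.normal(0.0, 1.0, t.size),
                             trail + rng.normal(0.0, 1.0, t.size))  # sigma = 1 mm
        system_err = fluence(lead + 0.5, trail - 0.5)               # gap +1 mm

        mask = nominal > 0
        print(np.abs(random_err - nominal)[mask].mean(),
              np.abs(system_err - nominal)[mask].mean())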

  11. ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction

    International Nuclear Information System (INIS)

    Wu Yan; Shannon, Mark A.

    2006-01-01

    The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed
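
    The proposed correction reduces to a straight-line extrapolation: regressing the measured CPD against the reciprocal of the ac driving amplitude and reading off the intercept as the true CPD. A minimal sketch with invented readings:

        import numpy as np

        v_ac = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # ac driving amplitudes (V)
        cpd = np.array([0.92, 0.71, 0.60, 0.55, 0.52])    # measured CPD (V), invented

        slope, intercept = np.polyfit(1.0 / v_ac, cpd, 1)
        print(intercept)   # estimate of the true CPD, amplitude-dependent error removed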

  12. Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Limiting factors for the precise orbit determination (POD) of low-earth-orbit (LEO) satellites using dual-frequency GPS are nowadays mainly encountered in the in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part of the phase error model are estimated, respectively, by the bin-wise mean and standard deviation values of the phase postfit residuals computed by orbit determination. By removing the systematic component and adjusting the weight of the phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. The GRACE data of 1–31 January 2006 are processed, and three types of orbit solutions, POD without phase error model correction, POD with mean value correction of the phase error model, and POD with full phase error model correction, are obtained. The three-dimensional (3D) orbit improvements derived from the phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D influences arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. It is also demonstrated by phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.
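
    A minimal sketch of the bin-wise estimation step described above (binning here by elevation angle of signal reception, which is an assumption about the binning variable): the bin means estimate the systematic component to subtract, and the bin standard deviations drive the reweighting of the phase data:

        import numpy as np

        def phase_error_model(elev_deg, residuals_m, n_bins=18):
            bins = np.linspace(0.0, 90.0, n_bins + 1)
            idx = np.clip(np.digitize(elev_deg, bins) - 1, 0, n_bins - 1)
            sys_part = np.full(n_bins, np.nan)     # systematic component per bin
            rnd_std = np.full(n_bins, np.nan)      # random component per bin
            for b in range(n_bins):
                r = residuals_m[idx == b]
                if r.size > 1:
                    sys_part[b] = r.mean()
                    rnd_std[b] = r.std(ddof=1)
            return bins, sys_part, rnd_std

        # correction: subtract sys_part[bin] from each phase observation and
        # weight it by 1 / rnd_std[bin]**2 in the next orbit determination run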

  13. Systematic investigation of SLC final focus tolerances to errors

    International Nuclear Information System (INIS)

    Napoly, O.

    1996-10-01

    In this paper we review the tolerances of the SLC final focus system. To calculate these tolerances we used the error analysis routine of the program FFADA, which has been written to aid the design and analysis of final focus systems for the future linear colliders. This routine, completed by S. Fartoukh, systematically reviews the errors generated by the geometric 6-D Euclidean displacements of each magnet as well as by the field errors (normal and skew) up to sextupolar order. It calculates their effects on the orbit and the transfer matrix to second order in the errors, thus including cross-talk between errors originating from two different magnets. It also translates these effects into tolerances derived from spot size growth and luminosity loss. We have run the routine for the following set of beam IP parameters: σ*x = 2.1 μm; σ*x′ = 300 μrad; σ*z = 1 mm; σ*y = 0.55 μm; σ*y′ = 200 μrad; σ*δ = 2 × 10^-3. The resulting errors and tolerances are displayed in a series of histograms which are reproduced in this paper. (author)

  14. Impact of systematic errors on DVH parameters of different OAR and target volumes in Intracavitary Brachytherapy (ICBT)

    International Nuclear Information System (INIS)

    Mourya, Ankur; Singh, Gaganpreet; Kumar, Vivek; Oinam, Arun S.

    2016-01-01

    The aim of this study is to analyze the impact of systematic errors on DVH parameters of different OARs and target volumes in intracavitary brachytherapy (ICBT). To quantify the changes in dose-volume histogram parameters due to systematic errors in applicator reconstruction during brachytherapy planning, known errors in catheter reconstruction have to be introduced in the applicator coordinate system.

  15. A method for the estimation of the residual error in the SALP approach for fault tree analysis

    International Nuclear Information System (INIS)

    Astolfi, M.; Contini, S.

    1980-01-01

    The aim of this report is to illustrate the algorithms implemented in the SALP-MP code for the estimation of the residual error. These algorithms are of more general use, and it would be possible to implement them in all previously developed codes of the SALP series as well as, with minor modifications, in analysis procedures based on 'top-down' approaches. At present, combined 'top-down'-'bottom-up' procedures are being studied in order to take advantage of both approaches for a further reduction of computer time and a better estimation of the residual error, for which the developed algorithms are still applicable.

  16. Medication errors in the Middle East countries: a systematic review of the literature.

    Science.gov (United States)

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20%) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1% to 90.5% for prescribing and from 9.4% to 80% for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15% to 34.8% of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality.

  17. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    Science.gov (United States)

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
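    The corrected variance formulas themselves are not reproduced in this record; as a sketch of the bootstrap alternative mentioned above, the following re-runs both stages of a linear TSRI fit on resampled individuals (synthetic data; all variable names are hypothetical):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000
        z = rng.binomial(2, 0.3, n).astype(float)  # genotype used as instrument
        x = 0.5 * z + rng.normal(size=n)           # exposure
        y = 0.3 * x + rng.normal(size=n)           # outcome

        def tsri_estimate(z, x, y):
            # Stage 1: regress exposure on instrument, keep residuals.
            g = np.column_stack([np.ones_like(z), z])
            res = x - g @ np.linalg.lstsq(g, x, rcond=None)[0]
            # Stage 2: regress outcome on exposure plus stage-1 residuals.
            h = np.column_stack([np.ones_like(x), x, res])
            return np.linalg.lstsq(h, y, rcond=None)[0][1]  # causal effect

        idx = rng.integers(0, n, size=(500, n))    # 500 bootstrap resamples
        boot = [tsri_estimate(z[i], x[i], y[i]) for i in idx]
        print("bootstrap standard error:", np.std(boot, ddof=1))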

  18. Electronic portal image assisted reduction of systematic set-up errors in head and neck irradiation

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Soernsen de Koste, John R. van; Creutzberg, Carien L.; Visser, Andries G.; Levendag, Peter C.; Heijmen, Ben J.M.

    2001-01-01

    Purpose: To quantify systematic and random patient set-up errors in head and neck irradiation and to investigate the impact of an off-line correction protocol on the systematic errors. Material and methods: Electronic portal images were obtained for 31 patients treated for primary supra-glottic larynx carcinoma who were immobilised using a polyvinyl chloride cast. The observed patient set-up errors were input to the shrinking action level (SAL) off-line decision protocol and appropriate set-up corrections were applied. To assess the impact of the protocol, the positioning accuracy without application of set-up corrections was reconstructed. Results: The set-up errors obtained without set-up corrections (1 standard deviation (SD) = 1.5-2 mm for random and systematic errors) were comparable to those reported in other studies on similar fixation devices. On average, six fractions per patient were imaged, and the set-up of half the patients was changed by the decision protocol. Most changes were detected during weekly check measurements, not during the first days of treatment. The application of the SAL protocol reduced the width of the distribution of systematic errors to 1 mm (1 SD), as expected from simulations. A retrospective analysis showed that this accuracy should be attainable with only two measurements per patient using a different off-line correction protocol that does not apply action levels. Conclusions: Off-line verification protocols can be particularly effective in head and neck patients because the random set-up errors are small. The excellent set-up reproducibility that can be achieved with such protocols enables accurate dose delivery in conformal treatments.

  19. Characterization of electromagnetic fields in the αSPECT spectrometer and reduction of systematic errors

    International Nuclear Information System (INIS)

    Ayala Guardia, Fidel

    2011-10-01

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of free neutron decay. From this spectrum, the electron-antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully implemented. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits an improvement of the present literature value of a. Furthermore, a custom NMR magnetometer was developed and improved during this thesis, which will lead to a reduction of magnetic-field-related uncertainties down to a level negligible for determining a with a total relative error of at most 0.3%.

  1. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation of the atomic masses. The reference data for Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of elements in the Table).

  2. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors

    International Nuclear Information System (INIS)

    Martinelli, T.; Panini, G.C.; Amoroso, A.

    1989-11-01

    Information about systematic errors is not given in EXFOR, the database of nuclear experimental measurements: their assessment is left to the ability of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, gives valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in the experiments. The features of the project which led to the implementation of the system are presented. (author)

  3. Fibonacci collocation method with a residual error function to solve linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Salih Yalcinbas

    2016-01-01

    In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve high-order linear Volterra integro-differential equations under mixed conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method, and comparisons are made with existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.

  4. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  5. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin

    2016-12-09

    In this paper we develop an a posteriori error estimator for a mixed finite element method for single-phase Darcy flow in a two-dimensional fractured porous medium. The discrete fracture model is applied, modelling the fractures as one-dimensional fractures in a two-dimensional domain. We consider the Raviart–Thomas mixed finite element method for the approximation of the coupled Darcy flows in the fractures and the surrounding porous media. We derive a robust residual-based a posteriori error estimator for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator are given. Moreover, our numerical results indicate that the a posteriori error estimator also works well for the problem with intersecting fractures.

  6. Physical predictions from lattice QCD. Reducing systematic errors

    International Nuclear Information System (INIS)

    Pittori, C.

    1994-01-01

    Some recent developments in the theoretical understanding of lattice quantum chromodynamics and of its possible sources of systematic errors are reported, and a review of some of the latest Monte Carlo results for light-quark phenomenology is presented. A very general introduction to quantum field theory on a discrete spacetime lattice is given, and the Monte Carlo methods which allow one to compute many interesting physical quantities in the non-perturbative domain of strong interactions are illustrated. (author). 17 refs., 3 figs., 3 tabs

  7. Characterization of electromagnetic fields in the aSPECT spectrometer and reduction of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Ayala Guardia, Fidel

    2011-10-15

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of free neutron decay. From this spectrum, the electron-antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed at the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully implemented. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits an improvement of the present literature value of a. Furthermore, a custom NMR magnetometer was developed and improved during this thesis, which will lead to a reduction of magnetic-field-related uncertainties down to a level negligible for determining a with a total relative error of at most 0.3%.

  8. Improvement of the physically-based groundwater model simulations through complementary correction of its errors

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Reyes Alcalde

    2017-04-01

    Physically-based groundwater models (PBM), such as MODFLOW, are used as groundwater resource evaluation tools under the assumption that the produced differences (residuals or errors) are white noise. In fact, however, these numerical simulations usually show not only random errors but also systematic errors. In this work, a numerical procedure has been developed to deal with PBM systematic errors, studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). The application of the CCM to a PBM shows a decrease in local biases, a better distribution of errors, and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology seems an interesting option for updating a PBM while avoiding the work and costs of interfering with its internal structure.

  9. Systematic errors due to linear congruential random-number generators with the Swendsen-Wang algorithm: a warning.

    Science.gov (United States)

    Ossola, Giovanni; Sokal, Alan D

    2004-08-01

    We show that linear congruential pseudo-random-number generators can cause systematic errors in Monte Carlo simulations using the Swendsen-Wang algorithm, if the lattice size is a multiple of a very large power of 2 and one random number is used per bond. These systematic errors arise from correlations within a single bond-update half-sweep. The errors can be eliminated (or at least radically reduced) by updating the bonds in a random order or in an aperiodic manner. It also helps to use a generator of large modulus (e.g., 60 or more bits).

  10. Strategies to reduce the systematic error due to tumor and rectum motion in radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Hoogeman, Mischa S.; Herk, Marcel van; Bois, Josien de; Lebesque, Joos V.

    2005-01-01

    Background and purpose: The goal of this work is to develop and evaluate strategies to reduce the uncertainty in the prostate position and rectum shape that arises in the preparation stage of the radiation treatment of prostate cancer. Patients and methods: Nineteen prostate cancer patients, who were treated with 3-dimensional conformal radiotherapy, each received a planning CT scan and 8-13 repeat CT scans during the treatment period. We quantified prostate motion relative to the pelvic bone by first matching the repeat CT scans on the planning CT scan using the bony anatomy. Subsequently, each contoured prostate, including seminal vesicles, was matched on the prostate in the planning CT scan to obtain the translations and rotations. The variation in prostate position was determined in terms of the systematic, random and group mean errors. We tested the performance of two correction strategies to reduce the systematic error due to prostate motion. The first strategy, the pre-treatment strategy, used only the initial rectum volume in the planning CT scan to adjust the angle of the prostate with respect to the left-right (LR) axis and the shape and position of the rectum. The second strategy, the adaptive strategy, used the data of the repeat CT scans to improve the estimate of the prostate position and rectum shape during the treatment. Results: The largest component of prostate motion was a rotation around the LR axis. The systematic error (1 SD) was 5.1 deg and the random error was 3.6 deg (1 SD). The average LR-axis rotation between the planning and the repeat CT scans correlated significantly with the rectum volume in the planning CT scan (r=0.86, P<0.0001). Correction of the rotational position on the basis of the planning rectum volume alone reduced the systematic error by 28%. A correction based on the data of the planning CT scan and 4 repeat CT scans reduced the systematic error over the complete treatment period by a factor of 2. When the correction was
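    For reference, the group mean, systematic and random components quoted above follow the usual convention in set-up error analysis: the group mean is the average over all patients, the systematic error (1 SD) is the spread of the per-patient means, and the random error is the root mean square of the per-patient spreads. A minimal sketch, assuming hypothetical per-patient rotation measurements:

        import numpy as np

        def setup_error_components(per_patient):
            """per_patient: one 1-D array of repeat measurements (e.g.
            LR-axis rotations in degrees) for each patient."""
            means = np.array([p.mean() for p in per_patient])
            group_mean = means.mean()              # overall mean
            systematic = means.std(ddof=1)         # SD of per-patient means
            random = np.sqrt(np.mean([p.var(ddof=1) for p in per_patient]))
            return group_mean, systematic, random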

  11. Enhancing Intervention for Residual Rhotic Errors Via App-Delivered Biofeedback: A Case Study.

    Science.gov (United States)

    Byun, Tara McAllister; Campbell, Heather; Carey, Helen; Liang, Wendy; Park, Tae Hong; Svirsky, Mario

    2017-06-22

    Recent research suggests that visual-acoustic biofeedback can be an effective treatment for residual speech errors, but adoption remains limited due to barriers including high cost and lack of familiarity with the technology. This case study reports results from the first participant to complete a course of visual-acoustic biofeedback using a not-for-profit iOS app, Speech Therapist's App for /r/ Treatment. App-based biofeedback treatment for rhotic misarticulation was provided in weekly 30-min sessions for 20 weeks. Within-treatment progress was documented using clinician perceptual ratings and acoustic measures. Generalization gains were assessed using acoustic measures of word probes elicited during baseline, treatment, and maintenance sessions. Both clinician ratings and acoustic measures indicated that the participant significantly improved her rhotic production accuracy in trials elicited during treatment sessions. However, these gains did not transfer to generalization probes. This study provides a proof-of-concept demonstration that app-based biofeedback is a viable alternative to costlier dedicated systems. Generalization of gains to contexts without biofeedback remains a challenge that requires further study. App-delivered biofeedback could enable clinician-research partnerships that would strengthen the evidence base while providing enhanced treatment for children with residual rhotic errors. https://doi.org/10.23641/asha.5116318.

  12. On the effects of systematic errors in analysis of nuclear scattering data

    International Nuclear Information System (INIS)

    Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.

    1995-01-01

    The effects of systematic errors in elastic scattering differential cross-section data on the assessment of the quality of fits to those data have been studied. Three cases are considered, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from 12C, from 350 MeV 16O-16O scattering, and from 288.6 MeV 12C-12C scattering. First, to estimate the probability of any unknown systematic errors, select sets of data were processed using the method of generalized cross-validation, a method based upon the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to the data, upon allowance for angle variation, became overdetermined. A far simpler S-function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed-energy inverse scattering study to specify effective, local Schroedinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs

  13. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    This paper designs a roundness measurement model with multiple systematic errors, which takes eccentricity, probe offset, radius of the probe tip head, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and of the component radius on the roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle. The effectiveness of the proposed method is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed measurement model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
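    For reference, the traditional limacon model the proposed method is compared against removes eccentricity by a least-squares fit of r(θ) ≈ R + a·cosθ + b·sinθ and reads roundness off the residual profile; a minimal sketch with synthetic data (the paper's multi-error model itself is not reproduced in this record):

        import numpy as np

        def limacon_roundness(theta, r):
            """Least-squares limacon fit; roundness is the peak-to-valley
            of the residuals after removing radius and eccentricity terms."""
            a = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
            coef, *_ = np.linalg.lstsq(a, r, rcond=None)
            residual = r - a @ coef
            return residual.max() - residual.min()

        theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
        r = 37.0 + 0.02 * np.cos(theta) + 0.001 * np.cos(5 * theta)  # mm, synthetic
        print(f"roundness: {limacon_roundness(theta, r) * 1e3:.2f} um")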

  14. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    This paper presents preliminary polarization measurements and the systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of the Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory, showing consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  15. LOWER BOUNDS ON PHOTOMETRIC REDSHIFT ERRORS FROM TYPE Ia SUPERNOVA TEMPLATES

    International Nuclear Information System (INIS)

    Asztalos, S.; Nikolaev, S.; De Vries, W.; Olivier, S.; Cook, K.; Wang, L.

    2010-01-01

    Cosmology with Type Ia supernovae has heretofore required extensive spectroscopic follow-up to establish an accurate redshift. Though this resource-intensive approach is tolerable at the present discovery rate, the next generation of ground-based all-sky survey instruments will render it unsustainable. Photometry-based redshift determination may be a viable alternative, though the technique introduces non-negligible errors that ultimately degrade the ability to discriminate between competing cosmologies. We present a strictly template-based photometric redshift estimator and compute redshift reconstruction errors in the presence of statistical errors. Under highly degraded photometric conditions corresponding to a statistical error σ of 0.5, the residual redshift error is found to be 0.236 when assuming a nightly observing cadence and a single Large Synoptic Survey Telescope (LSST) u-band filter. Utilizing all six LSST bandpass filters reduces the residual redshift error to 9.1 × 10^-3. Assuming a more optimistic statistical error σ of 0.05, we derive residual redshift errors of 4.2 × 10^-4, 5.2 × 10^-4, 9.2 × 10^-4, and 1.8 × 10^-3 for observations occurring nightly, every 5th, 20th and 45th night, respectively, in each of the six LSST bandpass filters. Adopting an observing cadence in which photometry is acquired with all six filters every 5th night and a realistic supernova distribution, binned redshift errors are combined with photometric errors with a σ of 0.17 and systematic errors with a σ of ∼0.003 to derive joint errors (σ_w, σ_w′) of (0.012, 0.066), respectively, in (w, w′) with 68% confidence using the Fisher matrix formalism. Though highly idealized in the present context, the methodology is nonetheless quite relevant for the next generation of ground-based all-sky surveys.
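    A strictly template-based estimator of this kind reduces to a chi-square minimisation of observed fluxes against redshifted template fluxes on a grid; the sketch below is a generic illustration (the template function, bands, and errors are placeholders, not the authors' pipeline):

        import numpy as np

        def photo_z(fluxes, sigmas, template_flux, z_grid):
            """template_flux(z) -> model fluxes per band at redshift z.
            Returns the grid redshift minimising chi-square, with the
            template amplitude treated as a free scale factor."""
            chi2 = np.empty_like(z_grid)
            for k, z in enumerate(z_grid):
                m = template_flux(z)
                scale = np.sum(fluxes * m / sigmas**2) / np.sum(m**2 / sigmas**2)
                chi2[k] = np.sum(((fluxes - scale * m) / sigmas) ** 2)
            return z_grid[np.argmin(chi2)]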

  16. Positioning errors assessed with kV cone-beam CT for image-guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Li Jiongyan; Guo Xiaomao; Yao Weiqiang; Wang Yanyang; Ma Jinli; Chen Jiayi; Zhang Zhen; Feng Yan

    2010-01-01

    Objective: To assess set-up errors measured with kilovoltage cone-beam CT (KV-CBCT) and the impact of online corrections on the margins required to account for set-up variability during IMRT for patients with prostate cancer. Methods: Seven patients with prostate cancer undergoing IMRT were enrolled in the study. KV-CBCT scans were acquired at least twice weekly. After initial set-up using the skin marks, a CBCT scan was acquired and registered with the planning CT to determine the set-up errors using auto grey-scale registration software. Corrections were made by moving the table if the set-up errors were considered clinically significant (i.e., >2 mm). A second CBCT scan was acquired immediately after the corrections to evaluate the residual error. PTV margins were derived to account for the measured set-up errors and residual errors determined for this group of patients. Results: 197 KV-CBCT images in total were acquired. The random and systematic positioning errors and calculated PTV margins without correction, in mm, were: a) lateral 3.1, 2.1, 9.3; b) longitudinal 1.5, 1.8, 5.1; c) vertical 4.2, 3.7, 13.0. The random and systematic positioning errors and calculated PTV margins with correction, in mm, were: a) lateral 1.1, 0.9, 3.4; b) longitudinal 0.7, 1.1, 2.5; c) vertical 1.1, 1.3, 3.7. Conclusions: With the guidance of online KV-CBCT, set-up errors could be reduced significantly for patients with prostate cancer receiving IMRT. The margin required after online CBCT correction for the patients enrolled in the study would be approximately 3-4 mm. (authors)
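    The quoted margins are of the magnitude given by standard population-based recipes; for illustration, the widely used van Herk formula M = 2.5Σ + 0.7σ is sketched below (this record does not state the exact recipe used, and the formula does not necessarily reproduce the quoted values):

        def van_herk_margin(systematic_mm, random_mm):
            """Population-based PTV margin recipe M = 2.5*Sigma + 0.7*sigma."""
            return 2.5 * systematic_mm + 0.7 * random_mm

        # Corrected lateral values reported above (random 1.1, systematic 0.9):
        print(f"{van_herk_margin(0.9, 1.1):.1f} mm")  # ~3.0 mm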

  17. A discontinuous Poisson-Boltzmann equation with interfacial jump: homogenisation and residual error estimate.

    Science.gov (United States)

    Fellner, Klemens; Kovtunenko, Victor A

    2016-01-01

    A nonlinear Poisson-Boltzmann equation with inhomogeneous Robin type boundary conditions at the interface between two materials is investigated. The model describes the electrostatic potential generated by a vector of ion concentrations in a periodic multiphase medium with dilute solid particles. The key issue stems from interfacial jumps, which necessitate discontinuous solutions to the problem. Based on variational techniques, we derive the homogenisation of the discontinuous problem and establish a rigorous residual error estimate up to the first-order correction.

  18. Impact of residual setup error on parotid gland dose in intensity-modulated radiation therapy with or without planning organ-at-risk margin

    International Nuclear Information System (INIS)

    Delana, Anna; Menegotti, Loris; Valentini, Aldo; Bolner, Andrea; Tomio, Luigi; Vanoni, Valentina; Lohr, Frank

    2009-01-01

    Purpose: To estimate the dosimetric impact of residual setup errors on parotid sparing in head-and-neck (H and N) intensity-modulated treatments and to evaluate the effect of employing a PRV (planning organ-at-risk volume) margin for the parotid gland. Patients and methods: Ten patients treated for H and N cancer were considered. Nine-beam intensity-modulated radiotherapy (IMRT) was planned for each patient. A second optimization was performed prescribing a dose constraint to the PRV of the parotid gland. Systematic setup errors of 2 mm, 3 mm, and 5 mm were simulated. The dose-volume histograms of the shifted and reference plans were compared with regard to mean parotid gland dose (MPD), normal-tissue complication probability (NTCP), and coverage of the clinical target volume (V95% and equivalent uniform dose [EUD]); the sensitivity of parotid sparing to setup error was evaluated with a probability-based approach. Results: MPD increased by 3.4%/mm and 3.0%/mm for displacements in the craniocaudal and lateral directions and by 0.7%/mm for displacements in the anterior-posterior direction. The probability of irradiating the parotid with a mean dose > 30 Gy was > 50% for setup errors in the cranial and lateral directions, while target coverage remained essentially unchanged (V95% and EUD variations < 1% and < 1 Gy). Conclusion: The parotid gland is more sensitive to craniocaudal and lateral displacements. A setup error of 2 mm guarantees an MPD ≤ 30 Gy in most cases, without adding a PRV margin. If greater displacements are expected/accepted, an adequate PRV margin could be used to meet the clinical parotid gland constraint of 30 Gy without affecting target volume coverage. (orig.)

  19. Investigating Systematic Errors of the Interstellar Flow Longitude Derived from the Pickup Ion Cutoff

    Science.gov (United States)

    Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Complementary to the direct neutral particle measurements performed by, e.g., IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool to investigate the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Through this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUIs' injection speed and thus the former neutral particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can shift the result by up to 5°. In turn, this means that every value for the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.

  20. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Science.gov (United States)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  1. Effect of heteroscedasticity treatment in residual error models on model calibration and prediction uncertainty estimation

    Science.gov (United States)

    Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli

    2017-11-01

    The heteroscedasticity treatment in residual error models directly impacts model calibration and prediction uncertainty estimation. This study compares three methods of dealing with heteroscedasticity: the explicit linear modeling (LM) method, the nonlinear modeling (NL) method using a hyperbolic tangent function, and the implicit Box-Cox transformation (BC). A combined approach (CA), combining the advantages of the LM and BC methods, is then proposed. In conjunction with a first-order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that LM-SEP yields the poorest streamflow predictions, with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
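    As a sketch of the two explicit treatments compared above, the residual standard deviation can be tied to the simulated flow either linearly (LM) or through a saturating hyperbolic-tangent form (NL); the coefficients below are illustrative, not those fitted in the study:

        import numpy as np

        def sigma_linear(sim_flow, a=0.1, b=0.2):
            """LM method: error SD grows linearly with simulated flow."""
            return a + b * sim_flow

        def sigma_tanh(sim_flow, a=0.1, b=2.0, c=5.0):
            """NL method: error SD rises with flow but saturates."""
            return a + b * np.tanh(sim_flow / c)

        # Standardised residuals eta = (obs - sim) / sigma(sim) are then
        # modelled with an AR(1) process and an SEP density, as in the study.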

  2. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of the molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results, and a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorbance levels generally considered 'safe' (i.e. absorbance < 1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of the molecular extinction coefficient is required to ensure robust analytical methods.
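    The deviation is easy to reproduce numerically: the apparent absorbance of a polychromatic band is -log10 of the intensity-weighted mean transmittance, which falls below the monochromatic Beer-Lambert value whenever ε(λ) varies across the band. A minimal sketch assuming a flat band profile and a linearly varying extinction coefficient (all values illustrative):

        import numpy as np

        def apparent_absorbance(conc_M, path_cm, eps_centre, eps_slope, half_width_nm):
            """Absorbance registered by a detector integrating a flat
            spectral band with a linearly varying extinction coefficient."""
            lam = np.linspace(-half_width_nm, half_width_nm, 201)  # offset from centre (nm)
            eps = eps_centre + eps_slope * lam                     # L mol^-1 cm^-1
            transmittance = 10.0 ** (-eps * conc_M * path_cm)
            return -np.log10(transmittance.mean())

        ideal = 1.0  # monochromatic absorbance at the band centre
        poly = apparent_absorbance(1e-5, 1.0, 1e5, 2e3, 5.0)
        print(f"systematic error: {100.0 * (poly - ideal) / ideal:.2f}%")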

  3. End-point construction and systematic titration error in linear titration curves-complexation reactions

    NARCIS (Netherlands)

    Coenegracht, P.M.J.; Duisenberg, A.J.M.

    The systematic titration error which is introduced by the intersection of tangents to hyperbolic titration curves is discussed. The effects of the apparent (conditional) formation constant, of the concentration of the unknown component and of the ranges used for the end-point construction are

  4. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.

  5. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    Science.gov (United States)

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high-frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing, carried out on all observed metabolites at once, to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data were successfully simulated using a four-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every time point. Systematic errors could be identified at time points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of the model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small
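    A minimal sketch of the detection step described above (Python/NumPy; a moving-average smoother and a fixed threshold stand in for the paper's nonparametric fit): smooth every metabolite trend, compute percent deviations, and flag time points where the median deviation across all metabolites is large, since a sample-wide shift is the signature of a dilution-type systematic error.

        import numpy as np

        def flag_dilution_errors(conc, window=5, threshold_pct=2.0):
            """conc: (n_metabolites, n_timepoints) array. Returns a boolean
            mask of time points flagged as systematic (dilution-like) errors.
            Moving-average values near the edges are only approximate."""
            kernel = np.ones(window) / window
            pct_dev = np.empty_like(conc, dtype=float)
            for i, trend in enumerate(conc):
                smooth = np.convolve(trend, kernel, mode="same")
                pct_dev[i] = 100.0 * (trend - smooth) / smooth
            median_dev = np.median(pct_dev, axis=0)
            return np.abs(median_dev) > threshold_pct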

  6. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence interval...

  7. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  8. Variation across mitochondrial gene trees provides evidence for systematic error: How much gene tree variation is biological?

    Science.gov (United States)

    Richards, Emilie J; Brown, Jeremy M; Barley, Anthony J; Chong, Rebecca A; Thomson, Robert C

    2018-02-19

    The use of large genomic datasets in phylogenetics has highlighted extensive topological variation across genes. Much of this discordance is assumed to result from biological processes. However, variation among gene trees can also be a consequence of systematic error driven by poor model fit, and the relative importance of biological versus methodological factors in explaining gene tree variation is a major unresolved question. Using mitochondrial genomes to control for biological causes of gene tree variation, we estimate the extent of gene tree discordance driven by systematic error and employ posterior prediction to highlight the role of model fit in producing this discordance. We find that the amount of discordance among mitochondrial gene trees is similar to the amount of discordance found in other studies that assume only biological causes of variation. This similarity suggests that the role of systematic error in generating gene tree variation is underappreciated and critical evaluation of fit between assumed models and the data used for inference is important for the resolution of unresolved phylogenetic questions.

  9. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    Science.gov (United States)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly correlated stochastic noise are more insidious, and less attention is drawn to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors, thanks to averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and of the range in the continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
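    The mechanism can be demonstrated with a short simulation: zero-mean Gaussian HU noise pushed through a piecewise-linear HU-to-RSP curve acquires a non-zero mean RSP error wherever the curve has an angular point, because the slopes on the two sides of the kink differ. The two-segment curve below is a toy calibration, not a clinical one.

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy HU-to-RSP calibration with an angular point at HU = 0.
        hu_nodes = np.array([-1000.0, 0.0, 1600.0])
        rsp_nodes = np.array([0.0, 1.0, 1.8])       # different slope on each side

        def hu_to_rsp(hu):
            return np.interp(hu, hu_nodes, rsp_nodes)

        true_hu = 0.0                               # material sitting on the kink
        noise = rng.normal(0.0, 20.0, size=100_000) # zero-mean stochastic CT noise
        bias = hu_to_rsp(true_hu + noise).mean() - hu_to_rsp(true_hu)
        print(f"systematic RSP error: {bias:+.4f}")  # non-zero despite zero-mean noise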

  10. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    Science.gov (United States)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman-spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on Raman pulse duration and frequency step size, present the vector and tensor light-shift-induced magnetic field measurement offsets, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and for reducing the quadratic Zeeman-effect-induced systematic error in Raman-transition-based precision measurements, such as atom interferometer gravimeters.

  11. Effects of residual hearing on cochlear implant outcomes in children: A systematic-review.

    Science.gov (United States)

    Chiossi, Julia Santos Costa; Hyppolito, Miguel Angelo

    2017-09-01

    To investigate whether preoperative residual hearing in prelingually deafened children can influence cochlear implant indication and outcomes. A systematic review was conducted in five international databases up to November 2016 to locate articles that evaluated cochlear implantation in children with some degree of preoperative residual hearing. Outcomes were auditory, language and cognition performance after cochlear implantation. The quality of the studies was assessed and classified according to the Oxford Levels of Evidence table (2011). Risks of bias were also described. From the 30 articles reviewed, two types of questions were identified: (a) what are the benefits of cochlear implantation in children with residual hearing? (b) is preoperative residual hearing a predictor of cochlear implant outcome? Studies ranged from 4 to 188 subjects, evaluating populations between 1.8 and 10.3 years old. The definition of residual hearing varied between studies. The majority of articles (n = 22) evaluated speech perception as the outcome, and 14 also assessed language and speech production. There is evidence that cochlear implantation is beneficial to children with residual hearing. Preoperative residual hearing seems to be valuable for predicting speech perception outcomes after cochlear implantation, even though the mechanism is not clear. More extensive research must be conducted in order to make recommendations and to set prognoses for cochlear implants based on children's preoperative residual hearing. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    Science.gov (United States)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels covering the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and False Alarm (FA) estimation biases, while the continuous decomposition of systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations", is introduced, while the behaviour of existing categorical/statistical measures and error components is also analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, it is observed that as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty in modeling the system and pattern of rainfall than bias due to rainfall uncertainties. The level of systematic error also increases considerably in heavier rainfalls. It is also important to note that PERSIANN's error characteristics vary with season due to the conditions and rainfall patterns of each season, which shows the necessity of a seasonally different approach for the calibration of
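    One common way to compute a systematic/random split of this kind (not necessarily the exact procedure of this study) is to regress the satellite estimates on the reference observations and measure, respectively, the distance of the fitted line from the observations and the scatter about it; a minimal sketch with hypothetical gauge and satellite arrays:

        import numpy as np

        def error_components(gauge, satellite):
            """Decompose the MSE of satellite estimates against gauge
            observations into systematic and random parts (they sum to
            the total MSE because OLS residuals are orthogonal)."""
            slope, intercept = np.polyfit(gauge, satellite, 1)
            fitted = slope * gauge + intercept
            systematic = np.mean((fitted - gauge) ** 2)
            random = np.mean((satellite - fitted) ** 2)
            return systematic, random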

  13. Shifted Legendre method with residual error estimation for delay linear Fredholm integro-differential equations

    Directory of Open Access Journals (Sweden)

    Şuayip Yüzbaşı

    2017-03-01

    In this paper, we suggest a matrix method for obtaining approximate solutions of delay linear Fredholm integro-differential equations with constant coefficients using the shifted Legendre polynomials. The problem is considered with mixed conditions. Using the required matrix operations, the delay linear Fredholm integro-differential equation is transformed into a matrix equation. Additionally, an error analysis for the method is presented using the residual function. Illustrative examples are given to demonstrate the efficiency of the method. The results obtained in this study are compared with known results.

  14. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.

  15. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    International Nuclear Information System (INIS)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-01-01

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
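    For reference, the gamma index referred to above combines a dose-difference criterion with a distance-to-agreement criterion; a brute-force 1-D sketch with global normalisation is given below (clinical implementations are 2-D/3-D and interpolate the evaluated distribution):

        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd_pct=3.0, dta_mm=3.0):
            """1-D global gamma: for each reference point, minimise the
            combined dose-difference / distance metric over all evaluated
            points. Pass rate is np.mean(gamma <= 1.0)."""
            dd = dd_pct / 100.0 * d_ref.max()
            gamma = np.empty_like(d_ref, dtype=float)
            for i, (x0, d0) in enumerate(zip(x_ref, d_ref)):
                g2 = ((x_eval - x0) / dta_mm) ** 2 + ((d_eval - d0) / dd) ** 2
                gamma[i] = np.sqrt(g2.min())
            return gamma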

  16. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    Energy Technology Data Exchange (ETDEWEB)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada and Department of Physics and Astronomy, University of Calgary, 2500 University Drive North West, Calgary, Alberta T2N 1N4 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4 (Canada) and Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada)

    2010-07-15

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
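
    The pass/fail criteria above (dose difference, DTA, gamma index) can be made concrete with a small example. Below is a minimal 1D sketch of a global gamma-index computation in the Low et al. formulation; the dose profiles, grid, and 3%/3 mm criteria are illustrative and do not reproduce the authors' measurement setup.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D global gamma index (Low et al. formulation).

    dose_tol is the dose-difference criterion as a fraction of the maximum
    reference dose; dist_tol is the distance-to-agreement criterion in mm.
    """
    d_max = dose_ref.max()
    gamma = np.empty_like(dose_ref)
    for i, (x_r, d_r) in enumerate(zip(positions, dose_ref)):
        dd = (dose_eval - d_r) / (dose_tol * d_max)   # normalized dose difference
        dx = (positions - x_r) / dist_tol             # normalized distance
        gamma[i] = np.sqrt(dx**2 + dd**2).min()
    return gamma

x = np.linspace(-50.0, 50.0, 201)                     # positions (mm)
ref = np.exp(-x**2 / 800)                             # toy calculated profile
meas = np.exp(-(x - 0.5)**2 / 800)                    # toy measurement, 0.5 mm shift
g = gamma_1d(ref, meas, x)
print(f"3%/3 mm gamma passing rate: {100 * np.mean(g <= 1):.1f}%")
```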

  17. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  18. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
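
    The record above pairs a boosting algorithm for GAMLSS with permutation tests. The sketch below shows only the permutation-test half, applied to a device effect on the random-error (scale) component; the data, test statistic, and device labels are invented for illustration and are not the authors' boosted procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_test_scale(x_a, x_b, n_perm=10_000):
    """Permutation test for a device effect on random error (spread).

    Test statistic: absolute log-ratio of sample standard deviations.
    """
    pooled = np.concatenate([x_a, x_b])
    n_a = len(x_a)
    t_obs = abs(np.log(x_a.std(ddof=1) / x_b.std(ddof=1)))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        t = abs(np.log(perm[:n_a].std(ddof=1) / perm[n_a:].std(ddof=1)))
        count += t >= t_obs
    return count / n_perm

# Device B has the same bias as device A but twice the random error
a = rng.normal(50, 1.0, size=60)
b = rng.normal(50, 2.0, size=60)
print(f"p-value for a scale effect: {perm_test_scale(a, b):.4f}")
```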

  19. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    Science.gov (United States)

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

    Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) to provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except for ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability and thus should not be used in future studies.
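
    For readers unfamiliar with the reliability indices quoted above, here is a minimal sketch of computing a one-way ICC and a within-subject CV from repeated trials; the subject count, trial count, and Dm values are simulated, not TMG data.

```python
import numpy as np

def icc_oneway(scores):
    """ICC(1,1) from a subjects x trials matrix via one-way ANOVA."""
    n, k = scores.shape
    grand = scores.mean()
    ms_between = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def cv_percent(scores):
    """Within-subject coefficient of variation (%), a common absolute-reliability index."""
    within_sd = scores.std(axis=1, ddof=1)
    return 100 * (within_sd / scores.mean(axis=1)).mean()

rng = np.random.default_rng(0)
true_dm = rng.normal(8.0, 1.5, size=(20, 1))         # 20 subjects, "true" Dm (mm)
trials = true_dm + rng.normal(0, 0.3, size=(20, 3))  # 3 repeated trials per subject
print(f"ICC = {icc_oneway(trials):.2f}, CV = {cv_percent(trials):.1f}%")
```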

  20. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    International Nuclear Information System (INIS)

    Parker, S

    2015-01-01

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors
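
    A minimal sketch of the individuals-chart control limits that underlie this kind of analysis, assuming the common I-MR estimate sigma ≈ (mean moving range) / 1.128; the energy-constancy values and the size of the simulated systematic shift are illustrative.

```python
import numpy as np

def imr_limits(x):
    """Center line and 3-sigma limits for an individuals (I-MR) control chart.

    sigma is estimated as (mean moving range) / d2, with d2 = 1.128 for n = 2.
    """
    sigma = np.abs(np.diff(x)).mean() / 1.128
    return x.mean() - 3 * sigma, x.mean(), x.mean() + 3 * sigma

rng = np.random.default_rng(3)
baseline = rng.normal(1.000, 0.002, size=30)   # in-control energy-constancy ratio
lcl, cl, ucl = imr_limits(baseline)
print(f"LCL = {lcl:.4f}, CL = {cl:.4f}, UCL = {ucl:.4f}")

shifted = rng.normal(1.008, 0.002, size=5)     # simulated systematic error
print("flagged as out of control:", (shifted < lcl) | (shifted > ucl))
```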

  1. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, (ii) implement an online correction (i.e., within the model) scheme to correct GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis Increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6-hr, assuming that initial model errors grow linearly and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal and diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short-term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS, correcting temperature and specific humidity online show reduction in model bias in 6-hr forecast. This approach can then be used to guide and optimize the design of sub
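
    A toy sketch of the bias-correction recipe described above: average the archived analysis increments over time, divide by the 6-h assimilation window, and apply the result as a forcing term in the model tendency equation. Array shapes and values are illustrative, not GFS fields.

```python
import numpy as np

# Archived analysis increments (analysis minus 6-h forecast) over many cycles,
# as a toy array of shape (n_cycles, n_lat, n_lon); values are illustrative.
rng = np.random.default_rng(0)
increments = rng.normal(0.1, 0.5, size=(400, 90, 180))     # K per 6-h cycle

# Time-mean increment divided by the 6-h window estimates the bias tendency,
# assuming initial model errors grow linearly (Danforth et al. 2007).
bias_tendency = increments.mean(axis=0) / 6.0              # K per hour

def corrected_tendency(model_tendency):
    """Online correction: add the estimated bias tendency as a forcing term.

    The mean increment opposes the model's drift, so it is added back.
    """
    return model_tendency + bias_tendency
```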

  2. Design of a real-time spectroscopic rotating compensator ellipsometer without systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Broch, Laurent, E-mail: laurent.broch@univ-lorraine.fr [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Stein, Nicolas [Institut Jean Lamour, Universite de Lorraine, UMR 7198 CNRS, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Zimmer, Alexandre [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS, Universite de Bourgogne, 9 avenue Alain Savary BP 47870, F-21078 Dijon Cedex (France); Battie, Yann; Naciri, Aotmane En [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France)

    2014-11-28

    We describe a spectroscopic ellipsometer in the visible domain (400–800 nm) based on rotating compensator technology using two detectors. The classical analyzer is replaced by a fixed Rochon birefringent beamsplitter which splits the incident light wave into two perpendicularly polarized waves, one oriented at +45° and the other at −45° with respect to the plane of incidence. Both emergent optical signals are analyzed by two identical CCD detectors which are synchronized by an optical encoder fixed on the shaft of the stepper motor of the compensator. The final spectrum is the result of the two averaged Ψ and Δ spectra acquired by both detectors. We show that Ψ and Δ spectra are acquired without systematic errors over a spectral range from 400 to 800 nm. The acquisition time can be adjusted down to 25 ms. The setup was validated by monitoring the first steps of bismuth telluride film electrocrystallization. The results show that induced experimental growth parameters, such as film thickness and volume fraction of deposited material, can be extracted with better trueness. - Highlights: • High-speed rotating compensator ellipsometer equipped with 2 detectors. • Ellipsometric angles acquired without systematic errors. • In-situ monitoring of electrocrystallization of bismuth telluride thin layers. • High accuracy of fitted physical parameters.

  3. Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Patel, Sandeep K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii, and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0, analyzing the clusters using either their large or mean angular core radius, are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta-model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.

  4. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    Science.gov (United States)

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge

  5. Dosimetric implications of inter- and intrafractional prostate positioning errors during tomotherapy : Comparison of gold marker-based registrations with native MVCT.

    Science.gov (United States)

    Wust, Peter; Joswig, Marc; Graf, Reinhold; Böhmer, Dirk; Beck, Marcus; Barelkowski, Thomasz; Budach, Volker; Ghadjar, Pirus

    2017-09-01

    For high-dose radiation therapy (RT) of prostate cancer, image-guided (IGRT) and intensity-modulated RT (IMRT) approaches are standard. Less is known regarding comparisons of different IGRT techniques and the resulting residual errors, as well as regarding their influences on dose distributions. A total of 58 patients who received tomotherapy-based RT up to 84 Gy for high-risk prostate cancer underwent IGRT based either on daily megavoltage CT (MVCT) alone (n = 43) or the additional use of gold markers (n = 15) under routine conditions. Planned Adaptive (Accuray Inc., Madison, WI, USA) software was used for elaborated offline analysis to quantify residual interfractional prostate positioning errors, along with systematic and random errors and the resulting safety margins after both IGRT approaches. Dosimetric parameters for clinical target volume (CTV) coverage and exposition of organs at risk (OAR) were also analyzed and compared. Interfractional as well as intrafractional displacements were determined. Particularly in the vertical direction, residual interfractional positioning errors were reduced using the gold marker-based approach, but dosimetric differences were moderate and the clinical relevance relatively small. Intrafractional prostate motion proved to be quite high, with displacements of 1-3 mm; however, these did not result in additional dosimetric impairments. Residual interfractional positioning errors were reduced using gold marker-based IGRT; however, this resulted in only slightly different final dose distributions. Therefore, daily MVCT-based IGRT without markers might be a valid alternative.

  6. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on the average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, scaling quadratically with the source strengths or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  7. Carers' Medication Administration Errors in the Domiciliary Setting: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Anam Parand

    Full Text Available Medications are mostly taken in patients' own homes, increasingly administered by carers, yet studies of medication safety have been largely conducted in the hospital setting. We aimed to review studies of how carers cause and/or prevent medication administration errors (MAEs) within the patient's home; to identify types, prevalence and causes of these MAEs and any interventions to prevent them. A narrative systematic review of literature published between 1 Jan 1946 and 23 Sep 2013 was carried out across the databases EMBASE, MEDLINE, PSYCHINFO, COCHRANE and CINAHL. Empirical studies were included where carers were responsible for preventing/causing MAEs in the home and standardised tools used for data extraction and quality assessment. Thirty-six papers met the criteria for narrative review, 33 of which included parents caring for children, two predominantly comprised adult children and spouses caring for older parents/partners, and one focused on paid carers mostly looking after older adults. The carer administration error rate ranged from 1.9 to 33% of medications administered and from 12 to 92.7% of carers administering medication. These included dosage errors, omitted administration, wrong medication and wrong time or route of administration. Contributory factors included individual carer factors (e.g. carer age), environmental factors (e.g. storage), medication factors (e.g. number of medicines), prescription communication factors (e.g. comprehensibility of instructions), psychosocial factors (e.g. carer-to-carer communication), and care-recipient factors (e.g. recipient age). The few interventions effective in preventing MAEs involved carer training and tailored equipment. This review shows that home medication administration errors made by carers are a potentially serious patient safety issue. Carers made similar errors to those made by professionals in other contexts and a wide variety of contributory factors were identified. The home care

  8. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States); Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada); Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States); Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States); Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were

  9. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Wee, L.; Harper, C.S.

    2010-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC errors on step and shoot IMRT of prostate cancer. Twelve MLC leaf bank perturbations were introduced to six prostate IMRT treatment plans to simulate MLC systematic errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) to planning target volume (PTV), rectum and bladder. Negative perturbations of MLC had been found to produce greater changes to endpoint dose parameters than positive perturbations of MLC (p < 0.05). Negative and positive synchronized MLC perturbations of 1 mm resulted in median changes of -2.32 and 1.78%, respectively, to D95% of the PTV, whereas asynchronized MLC perturbations of the same direction and magnitude resulted in median changes of 1.18 and 0.90%, respectively. Doses to rectum were generally more sensitive to systematic MLC errors compared to bladder. Synchronized MLC perturbations of 1 mm resulted in median changes of endpoint dose parameters to both rectum and bladder from about 1 to 3%. Maximum reductions of -4.44 and -7.29% were recorded for CI and HTA, respectively, due to synchronized MLC perturbation of 1 mm. In summary, MLC errors resulted in measurable dose changes to the PTV and surrounding critical structures in prostate IMRT. (author)

  10. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    Science.gov (United States)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  11. Assessing systematic errors in GOSAT CO2 retrievals by comparing assimilated fields to independent CO2 data

    Science.gov (United States)

    Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.

    2012-12-01

    Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (version 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. With this assurance, we compare
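
    A minimal sketch of the correction strategy described above: regress the model-minus-retrieval XCO2 differences on the physical variables suspected of driving retrieval errors, then remove the fitted part from the retrievals. The predictors, coefficients, and data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
X = np.column_stack([rng.uniform(0, 0.3, n),      # aerosol amount
                     rng.uniform(0.05, 0.6, n),   # surface albedo
                     rng.normal(0, 1, n)])        # total-column mass correction
# Toy model-minus-retrieval differences driven by the predictors plus noise
diff = 0.8 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.5, n)

A = np.column_stack([np.ones(n), X])              # add intercept
coef, *_ = np.linalg.lstsq(A, diff, rcond=None)
xco2_correction = A @ coef                        # ppm, subtracted from retrievals
print("fitted coefficients:", np.round(coef, 2))
```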

  12. Explicit residual error estimation for quantities of interest using bubble functions

    OpenAIRE

    Rosales, R.; Díez, P.

    2009-01-01

    In this work, a new explicit a posteriori residual error estimator for elliptic problems, oriented to quantities of interest, is introduced. We propose using bubble functions over elements and over edges. The starting point is the finite element solution of the primal problem and that of an adjoint (or dual) problem associated with a user-defined quantity of interest, for example, the temperature variation or the displacement of a point of the domain. The estimate is computed in...

  13. A new calibration model for pointing a radio telescope that considers nonlinear errors in the azimuth axis

    International Nuclear Information System (INIS)

    Kong De-Qing; Wang Song-Gen; Zhang Hong-Bo; Wang Jin-Qing; Wang Min

    2014-01-01

    A new calibration model of a radio telescope that includes pointing error is presented, which considers nonlinear errors in the azimuth axis. For a large radio telescope, in particular for a telescope with a turntable, it is difficult to correct pointing errors using a traditional linear calibration model, because errors produced by the wheel-on-rail or center bearing structures are generally nonlinear. Fourier expansion is made for the oblique error and parameters describing the inclination direction along the azimuth axis based on the linear calibration model, and a new calibration model for pointing is derived. The new pointing model is applied to the 40m radio telescope administered by Yunnan Observatories, which is a telescope that uses a turntable. The results show that this model can significantly reduce the residual systematic errors due to nonlinearity in the azimuth axis compared with the linear model
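
    A sketch of the general idea, assuming a linear-in-parameters pointing model augmented with low-order Fourier terms in azimuth to absorb nonlinear rail or bearing errors; the term set and test data are illustrative, not the authors' exact parameterization.

```python
import numpy as np

def design_matrix(az, el, n_harmonics=3):
    """Classical pointing terms plus Fourier terms in azimuth A."""
    cols = [np.ones_like(az), np.cos(el), np.sin(el)]      # classical terms
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(k * az), np.cos(k * az)]           # Fourier terms
    return np.column_stack(cols)

rng = np.random.default_rng(2)
az = rng.uniform(0, 2 * np.pi, 300)
el = rng.uniform(0.2, 1.4, 300)
# Toy pointing offsets (deg) with nonlinear azimuth-dependent systematics
err = (0.01 + 0.004 * np.sin(2 * az) + 0.002 * np.cos(3 * az)
       + rng.normal(0, 5e-4, az.size))

A = design_matrix(az, el)
coef, *_ = np.linalg.lstsq(A, err, rcond=None)
residual = err - A @ coef
print(f"rms before: {err.std():.4f} deg, after: {residual.std():.4f} deg")
```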

  14. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    Science.gov (United States)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be accounted for in the retrieval algorithms to create a set of data which is closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  15. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    International Nuclear Information System (INIS)

    DeSalvo, Riccardo

    2015-01-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similar to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in breakdown of Hooke's law. • Self-organized criticality produces unpredictable shifts of the equilibrium point. • A new dissipation mechanism, different from loss angle and viscous models, is necessary. • Proposed mitigation measures may bring coherence to the measurements of G.

  16. Error management process for power stations

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Takeda, Daisuke; Fujimoto, Junzo; Nagasaka, Akihiko

    2016-01-01

    The purpose of this study is to establish an 'error management process for power stations' for systematizing activities for human error prevention and for fostering continuous improvement of these activities. The following are proposed by deriving concepts concerning the error management process from existing knowledge and realizing them through application and evaluation of their effectiveness at a power station: an entire picture of the error management process that facilitates four functions requisite for managing human error prevention effectively (1. systematizing human error prevention tools, 2. identifying problems based on incident reports and taking corrective actions, 3. identifying good practices and potential problems for taking proactive measures, 4. prioritizing human error prevention tools based on identified problems); detailed steps for each activity (i.e. developing an annual plan for human error prevention, reporting and analyzing incidents and near misses) based on a model of human error causation; procedures and examples of items for identifying gaps between current and desired levels of executions and outputs of each activity; and stages for introducing and establishing the above proposed error management process at a power station. By giving shape to the above proposals at a power station, systematization and continuous improvement of activities for human error prevention in line with the actual situation of the power station can be expected. (author)

  17. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can...
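
    A toy illustration of the external-bias idea: the systematic deviation is modeled as an autocorrelated (AR(1)) stochastic process added to the model output, rather than white noise. All parameter values and the runoff signal are invented; this is not the authors' EBD or IND formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
model_out = 5 + np.sin(np.linspace(0, 20, n))   # toy simulated runoff

# AR(1) bias process: persistent, autocorrelated systematic deviation
phi, sigma_b, sigma_e = 0.95, 0.05, 0.1
bias = np.zeros(n)
for t in range(1, n):
    bias[t] = phi * bias[t - 1] + rng.normal(0, sigma_b)

# Observed output = model + systematic bias + white measurement noise
observed = model_out + bias + rng.normal(0, sigma_e, n)
```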

  18. Comparison of orthogonal kilovolt X-ray images and cone-beam CT matching results in setup error assessment and correction for EB-PBI during free breathing

    International Nuclear Information System (INIS)

    Wang Wei; Li Jianbin; Hu Hongguang; Ma Zhifang; Xu Min; Fan Tingyong; Shao Qian; Ding Yun

    2014-01-01

    Objective: To compare the differences in setup error (SE) assessment and correction between orthogonal kilovolt X-ray images and CBCT in EB-PBI patients during free breathing. Methods: Nineteen patients who received EB-PBI after breast-conserving surgery were recruited. Interfraction SE was acquired using orthogonal kilovolt X-ray setup images and CBCT; after on-line setup correction, the residual error was calculated, and the SE, residual error and setup margin (SM) quantified for orthogonal kilovolt X-ray images and CBCT were compared. The Wilcoxon signed-rank test was used to evaluate the differences. Results: The CBCT-based systematic error (∑) was smaller than the orthogonal kilovolt X-ray image-based ∑ in the AP direction (-1.2 mm vs 2.00 mm; P=0.005), and there were no statistically significant differences in random error (σ) for the three directions (P=0.948, 0.376, 0.314). After on-line setup correction, CBCT yielded smaller setup residual errors than the orthogonal kilovolt X-ray images in the AP direction (Σ: -0.20 mm vs 0.50 mm, P=0.008; σ: 0.45 mm vs 1.34 mm, P=0.002). The CBCT-based SM was also smaller than the orthogonal kilovolt X-ray image-based SM in the AP direction (Σ: -1.39 mm vs 5.57 mm, P=0.003; σ: 0.00 mm vs 3.2 mm, P=0.003). Conclusions: Compared with kilovolt X-ray images, CBCT underestimated the setup error in the AP direction but significantly decreased the setup residual error. Image-guided radiotherapy and setup error assessment using kilovolt X-ray images for EB-PBI plans is feasible. (authors)

  19. Measurement properties and usability of non-contact scanners for measuring transtibial residual limb volume.

    Science.gov (United States)

    Kofman, Rianne; Beekman, Anna M; Emmelot, Cornelis H; Geertzen, Jan H B; Dijkstra, Pieter U

    2018-06-01

    Non-contact scanners may have potential for measurement of residual limb volume. Different non-contact scanners have been introduced during the last decades. Reliability and usability (practicality and user friendliness) should be assessed before introducing these systems in clinical practice. The aim of this study was to analyze the measurement properties and usability of four non-contact scanners (TT Design, Omega Scanner, BioSculptor Bioscanner, and Rodin4D Scanner). Quasi-experimental design. Nine (geometric and residual limb) models were measured on two occasions, each consisting of two sessions, thus four sessions in total. In each session, four observers used the four systems for volume measurement. The mean for each model, repeatability coefficients for each system, variance components, and their two-way interactions with measurement conditions were calculated. User satisfaction was evaluated with the Post-Study System Usability Questionnaire. Systematic differences between the systems were found in volume measurements. Most of the variance was explained by the model (97%), while error variance was 3%. Measurement system and the interaction between system and model explained 44% of the error variance. Repeatability coefficients of the systems ranged from 0.101 (Omega Scanner) to 0.131 L (Rodin4D). Differences in Post-Study System Usability Questionnaire scores between the systems were small and not significant. The systems were reliable in determining residual limb volume. Measurement systems and the interaction between system and residual limb model explained most of the error variance. The differences in repeatability coefficient and usability between the four CAD/CAM systems were small. Clinical relevance: If accurate measurements of residual limb volume are required (in the case of research), modern non-contact scanners should be taken into consideration.

  20. Avoiding a Systematic Error in Assessing Fat Graft Survival in the Breast with Repeated Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Glovinski, Peter Viktor; Herly, Mikkel; Müller, Felix C

    2016-01-01

    Several techniques for measuring breast volume (BV) are based on examining the breast on magnetic resonance imaging. However, when techniques designed to measure total BV are used to quantify BV changes, for example, after fat grafting, a systematic error is introduced because BV changes lead to ...

  1. Using residual stacking to mitigate site-specific errors in order to improve the quality of GNSS-based coordinate time series of CORS

    Science.gov (United States)

    Knöpfler, Andreas; Mayer, Michael; Heck, Bernhard

    2014-05-01

    Within the last decades, positioning using GNSS (Global Navigation Satellite Systems; e.g., GPS) has become a standard tool in many (geo-)sciences. The positioning methods Precise Point Positioning and differential point positioning based on carrier phase observations have been developed for a broad variety of applications with different demands, for example on accuracy. In high precision applications, much effort has been invested to mitigate different error sources: the products for satellite orbits and satellite clocks were improved; the misbehaviour of satellite and receiver antennas compared to an ideal antenna is modelled by calibration values on an absolute level; and the modelling of the ionosphere and the troposphere is updated year by year. Therefore, when processing data of CORS (continuously operating reference sites) equipped with geodetic hardware using a sophisticated strategy, the latest products and models nowadays enable positioning accuracies at the low-mm level. Despite the considerable improvements that have been achieved within GNSS data processing, a generally valid multipath model is still lacking. Therefore, site-specific multipath still represents a major error source in precise GNSS positioning. Furthermore, the calibration information of receiving GNSS antennas, which is for instance derived by a robot or chamber calibration, is strictly speaking valid only for the location of the calibration. The calibrated antenna can show a slightly different behaviour at the CORS due to near-field multipath effects. One very promising strategy to mitigate multipath effects as well as imperfectly calibrated receiver antennas is to stack observation residuals of several days, whereby multipath-loaded observation residuals are analysed, for example with respect to signal direction, to find and reduce systematic constituents. This presentation will give a short overview of existing stacking approaches. In addition, first results of the stacking approach
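
    A minimal sketch of residual stacking as described above: bin carrier-phase residuals by signal direction (azimuth/elevation) and average them, yielding a site-specific correction map that can be subtracted from future observations. The bin sizes and the toy residual field are illustrative.

```python
import numpy as np

def stack_residuals(az, el, res, az_bins=72, el_bins=18):
    """Average observation residuals on an azimuth/elevation grid.

    The stacked map captures direction-dependent systematics (multipath,
    imperfect antenna calibration) for later subtraction.
    """
    az_idx = np.clip((az / 360 * az_bins).astype(int), 0, az_bins - 1)
    el_idx = np.clip((el / 90 * el_bins).astype(int), 0, el_bins - 1)
    sums = np.zeros((az_bins, el_bins))
    counts = np.zeros((az_bins, el_bins))
    np.add.at(sums, (az_idx, el_idx), res)
    np.add.at(counts, (az_idx, el_idx), 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(counts > 0, sums / counts, 0.0)

# Toy residuals (m) with a direction-dependent systematic plus noise
rng = np.random.default_rng(5)
az = rng.uniform(0, 360, 20000)
el = rng.uniform(5, 90, 20000)
res = (2e-3 * np.sin(np.radians(az)) * np.cos(np.radians(el))
       + rng.normal(0, 1e-3, az.size))
correction_map = stack_residuals(az, el, res)
```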

  2. Residual position errors of lymph node surrogates in breast cancer adjuvant radiotherapy: Comparison of two arm fixation devices and the effect of arm position correction

    International Nuclear Information System (INIS)

    Kapanen, Mika; Laaksomaa, Marko; Skyttä, Tanja; Haltamo, Mikko; Pehkonen, Jani; Lehtonen, Turkka; Kellokumpu-Lehtinen, Pirkko-Liisa; Hyödynmaa, Simo

    2016-01-01

    Residual position errors of the lymph node (LN) surrogates and humeral head (HH) were determined for 2 different arm fixation devices in radiotherapy (RT) of breast cancer: a standard wrist-hold (WH) and a house-made rod-hold (RH). The effect of arm position correction (APC) based on setup images was also investigated. A total of 113 consecutive patients with early-stage breast cancer with LN irradiation were retrospectively analyzed (53 and 60 using the WH and RH, respectively). Residual position errors of the LN surrogates (Th1-2 and clavicle) and the HH were investigated to compare the 2 fixation devices. The position errors and setup margins were determined before and after the APC to investigate the efficacy of the APC in the treatment situation. A threshold of 5 mm was used for the residual errors of the clavicle and Th1-2 to perform the APC, and a threshold of 7 mm was used for the HH. The setup margins were calculated with the van Herk formula. Irradiated volumes of the HH were determined from RT treatment plans. With the WH and the RH, setup margins up to 8.1 and 6.7 mm should be used for the LN surrogates, and margins up to 4.6 and 3.6 mm should be used to spare the HH, respectively, without the APC. After the APC, the margins of the LN surrogates were equal to or less than 7.5/6.0 mm with the WH/RH, but margins up to 4.2/2.9 mm were required for the HH. The APC was needed at least once with both the devices for approximately 60% of the patients. With the RH, irradiated volume of the HH was approximately 2 times more than with the WH, without any dose constraints. Use of the RH together with the APC resulted in minimal residual position errors and setup margins for all the investigated bony landmarks. Based on the obtained results, we prefer the house-made RH. However, more attention should be given to minimize the irradiation of the HH with the RH than with the WH.
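
    The margins quoted above combine systematic (Σ) and random (σ) error components via the van Herk formula, M = 2.5Σ + 0.7σ. A one-line version for concreteness; the input values below are illustrative, not the study's data.

```python
def van_herk_margin(sigma_sys, sigma_rand):
    """CTV-to-PTV setup margin (mm) from systematic (Sigma) and random (sigma)
    error components, using the van Herk recipe M = 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

print(f"margin: {van_herk_margin(2.4, 2.1):.1f} mm")   # -> 7.5 mm
```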

  3. Causes of medication administration errors in hospitals: a systematic review of quantitative and qualitative evidence.

    Science.gov (United States)

    Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M

    2013-11-01

    Underlying systems factors have been seen to be crucial contributors to the occurrence of medication errors. By understanding the causes of these errors, the most appropriate interventions can be designed and implemented to minimise their occurrence. This study aimed to systematically review and appraise empirical evidence relating to the causes of medication administration errors (MAEs) in hospital settings. Nine electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, ASSIA, PsycINFO, British Nursing Index, CINAHL, Health Management Information Consortium and Social Science Citations Index) were searched between 1985 and May 2013. Inclusion and exclusion criteria were applied to identify eligible publications through title analysis followed by abstract and then full text examination. English language publications reporting empirical data on causes of MAEs were included. Reference lists of included articles and relevant review papers were hand searched for additional studies. Studies were excluded if they did not report data on specific MAEs, used accounts from individuals not directly involved in the MAE concerned or were presented as conference abstracts with insufficient detail. A total of 54 unique studies were included. Causes of MAEs were categorised according to Reason's model of accident causation. Studies were assessed to determine relevance to the research question and how likely the results were to reflect the potential underlying causes of MAEs based on the method(s) used. Slips and lapses were the most commonly reported unsafe acts, followed by knowledge-based mistakes and deliberate violations. Error-provoking conditions influencing administration errors included inadequate written communication (prescriptions, documentation, transcription), problems with medicines supply and storage (pharmacy dispensing errors and ward stock management), high perceived workload, problems with ward-based equipment (access, functionality

  4. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
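
    A toy single-qubit illustration of why coherent errors outpace a Pauli (stochastic) model: under repeated systematic rotations the amplitudes add, so failure probability grows like sin²(Nε), whereas a stochastic model of the same per-cycle error accumulates probabilities. This is not the paper's repetition-code calculation, only the underlying accumulation effect.

```python
import numpy as np

eps = 0.01                                        # small per-cycle rotation angle
cycles = np.arange(1, 201)

p_coherent = np.sin(cycles * eps) ** 2            # amplitudes accumulate
p_pauli = 1 - (1 - np.sin(eps) ** 2) ** cycles    # probabilities accumulate

for n in (10, 100, 200):
    print(f"N={n:3d}: coherent {p_coherent[n-1]:.2e}  Pauli {p_pauli[n-1]:.2e}")
```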

  5. Increased errors and decreased performance at night: A systematic review of the evidence concerning shift work and quality.

    Science.gov (United States)

    de Cordova, Pamela B; Bradford, Michelle A; Stone, Patricia W

    2016-02-15

    Shift workers have worse health outcomes than employees who work standard business hours. However, it is unclear how this poorer health may be related to employee work productivity. The purpose of this systematic review is to assess the relationship between shift work and errors and performance. Searches of MEDLINE/PubMed, EBSCOhost, and CINAHL were conducted to identify articles that examined the relationship between shift work, errors, quality, productivity, and performance. All articles were assessed for study quality. A total of 435 abstracts were screened, with 13 meeting inclusion criteria. Eight studies were rated to be of strong methodological quality. Nine studies demonstrated a positive relationship: night shift workers committed more errors and had decreased performance. Night shift workers have worse health, which may contribute to errors and decreased performance in the workplace.

  6. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  7. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Harper, C.S.; Wee, L.

    2011-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC positional errors on step and shoot IMRT of prostate cancer. A total of 12 perturbations of MLC leaf banks were introduced to six prostate IMRT treatment plans to simulate MLC systematic positional errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) to planning target volume (PTV), rectum and bladder. Negative perturbations of MLC had been found to produce greater changes to endpoint dose parameters than positive perturbations of MLC (p < 0.05). Negative and positive asynchronised MLC perturbations of 1 mm resulted in median changes in D95 of -1.2 and 0.9%, respectively. Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in D95 of -2.3 and 1.8%, respectively. Doses to rectum were generally more sensitive to systematic MLC errors compared to bladder (p < 0.01). Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in endpoint dose parameters of rectum and bladder from 1.0 to 2.5%. Maximum reductions of -4.4 and -7.3% were recorded for conformity index (CI) and healthy tissue avoidance (HTA) respectively, due to synchronised MLC perturbation of 1 mm. MLC errors resulted in dosimetric changes in IMRT plans for prostate. (author)

  8. Nature versus nurture: A systematic approach to elucidate gene-environment interactions in the development of myopic refractive errors.

    Science.gov (United States)

    Miraldi Utz, Virginia

    2017-01-01

    Myopia is the most common eye disorder and major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach using a combination of gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Differential expression of refractive error was associated with time spent reading for those with low frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.

  9. Systematic literature review of hospital medication administration errors in children

    Directory of Open Access Journals (Sweden)

    Ameer A

    2015-11-01

    Full Text Available Ahmed Ameer,1 Soraya Dhillon,1 Mark J Peters,2 Maisoon Ghaleb1 (1Department of Pharmacy, School of Life and Medical Sciences, University of Hertfordshire, Hatfield, UK; 2Paediatric Intensive Care Unit, Great Ormond Street Hospital, London, UK) Objective: Medication administration is the last step in the medication process. It can act as a safety net to prevent unintended harm to patients if detected. However, medication administration errors (MAEs) during this process have been documented and thought to be preventable. In pediatric medicine, doses are usually administered based on the child's weight or body surface area. This in turn increases the risk of drug miscalculations and therefore MAEs. The aim of this review is to report MAEs occurring in pediatric inpatients. Methods: Twelve bibliographic databases were searched for studies published between January 2000 and February 2015 using "medication administration errors", "hospital", and "children" related terminologies. Handsearching of relevant publications was also carried out. A second reviewer screened articles for eligibility and quality in accordance with the inclusion/exclusion criteria. Key findings: A total of 44 studies were systematically reviewed. MAEs were generally defined as a deviation of dose given from that prescribed; this included omitted doses and administration at the wrong time. Hospital MAEs in children accounted for a mean of 50% of all reported medication error reports (n=12,588). It was also identified in a mean of 29% of doses observed (n=8,894). The most prevalent types of MAEs related to preparation, infusion rate, dose, and time. This review has identified five types of interventions to reduce hospital MAEs in children: barcode medicine administration, electronic prescribing, education, use of smart pumps, and standard concentration. Conclusion: This review has identified a wide variation in the prevalence of hospital MAEs in children. This is attributed to

  10. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence … of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show … that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy corrected OCO backbone…

  11. Technical Note: Interference errors in infrared remote sounding of the atmosphere

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2007-07-01

    Full Text Available Classical error analysis in remote sounding distinguishes between four classes: "smoothing errors," "model parameter errors," "forward model errors," and "retrieval noise errors". For infrared sounding "interference errors", which, in general, cannot be described by these four terms, can be significant. Interference errors originate from spectral residuals due to "interfering species" whose spectral features overlap with the signatures of the target species. A general method for quantification of interference errors is presented, which covers all possible algorithmic implementations, i.e., fine-grid retrievals of the interfering species or coarse-grid retrievals, and cases where the interfering species are not retrieved. In classical retrieval setups interference errors can exceed smoothing errors and can vary by orders of magnitude due to state dependency. An optimum strategy is suggested which practically eliminates interference errors by systematically minimizing the regularization strength applied to joint profile retrieval of the interfering species. This leads to an interfering-species selective deweighting of the retrieval. Details of microwindow selection are no longer critical for this optimum retrieval and widened microwindows even lead to reduced overall (smoothing and interference errors. Since computational power will increase, more and more operational algorithms will be able to utilize this optimum strategy in the future. The findings of this paper can be applied to soundings of all infrared-active atmospheric species, which include more than two dozen different gases relevant to climate and ozone. This holds for all kinds of infrared remote sounding systems, i.e., retrievals from ground-based, balloon-borne, airborne, or satellite spectroradiometers.

  12. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  13. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  14. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    Energy Technology Data Exchange (ETDEWEB)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S. [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States)

    2012-10-15

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II-IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction

  15. A study of respiration-correlated cone-beam CT scans to correct target positioning errors in radiotherapy of thoracic cancer

    International Nuclear Information System (INIS)

    Santoro, J. P.; McNamara, J.; Yorke, E.; Pham, H.; Rimner, A.; Rosenzweig, K. E.; Mageras, G. S.

    2012-01-01

    Purpose: There is increasingly widespread usage of cone-beam CT (CBCT) for guiding radiation treatment in advanced-stage lung tumors, but difficulties associated with daily CBCT in conventionally fractionated treatments include imaging dose to the patient, increased workload and longer treatment times. Respiration-correlated cone-beam CT (RC-CBCT) can improve localization accuracy in mobile lung tumors, but further increases the time and workload for conventionally fractionated treatments. This study investigates whether RC-CBCT-guided correction of systematic tumor deviations in standard fractionated lung tumor radiation treatments is more effective than 2D image-based correction of skeletal deviations alone. A second study goal compares respiration-correlated vs respiration-averaged images for determining tumor deviations. Methods: Eleven stage II–IV nonsmall cell lung cancer patients are enrolled in an IRB-approved prospective off-line protocol using RC-CBCT guidance to correct for systematic errors in GTV position. Patients receive a respiration-correlated planning CT (RCCT) at simulation, daily kilovoltage RC-CBCT scans during the first week of treatment and weekly scans thereafter. Four types of correction methods are compared: (1) systematic error in gross tumor volume (GTV) position, (2) systematic error in skeletal anatomy, (3) daily skeletal corrections, and (4) weekly skeletal corrections. The comparison is in terms of weighted average of the residual GTV deviations measured from the RC-CBCT scans and representing the estimated residual deviation over the treatment course. In the second study goal, GTV deviations computed from matching RCCT and RC-CBCT are compared to deviations computed from matching respiration-averaged images consisting of a CBCT reconstructed using all projections and an average-intensity-projection CT computed from the RCCT. Results: Of the eleven patients in the GTV-based systematic correction protocol, two required no correction

  16. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

    …by individual C=O bonds. Energy corrections applied to C=O bonds significantly reduce systematic errors and can be extended to adsorbates. A similar study is performed for intermediates in the oxygen evolution and oxygen reduction reactions. An identified systematic error on peroxide bonds is found to also … be present in the OOH* adsorbate. However, the systematic error will almost be canceled by inclusion of van der Waals energy. The energy difference between key adsorbates is thus similar to that previously found. Finally, a method is developed for error estimation in computationally inexpensive neural…

  17. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals' psychological well-being.

    Science.gov (United States)

    Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry

    2010-12-01

    Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error [2, 3]. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted on the included studies using a tool developed as part of this research, but due to the limited number and diverse nature of the studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.

  18. Economic impact of medication error: a systematic review.

    Science.gov (United States)

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Effects of Target Positioning Error on Motion Compensation for Airborne Interferometric SAR

    Directory of Open Access Journals (Sweden)

    Li Yin-wei

    2013-12-01

    Full Text Available The measurement inaccuracies of the Inertial Measurement Unit/Global Positioning System (IMU/GPS), as well as the positioning error of the target, may contribute to residual uncompensated motion errors in the MOtion COmpensation (MOCO) approach based on IMU/GPS measurements. Addressing the effects of target positioning error on MOCO for airborne interferometric SAR, the paper first derives a mathematical model of the residual motion error brought about by target positioning error under squint conditions. Based on this model, the paper analyzes the residual motion error caused by system sampling delay error, Doppler center frequency error and reference DEM error, each of which contributes to target positioning error. The paper then discusses the effects of the reference DEM error on interferometric SAR image quality, the interferometric phase and the coherence coefficient. The research provides a theoretical basis for MOCO precision in the signal processing of airborne high-precision SAR and airborne repeat-pass interferometric SAR.

  20. IceCube systematic errors investigation: Simulation of the ice

    Energy Technology Data Exchange (ETDEWEB)

    Resconi, Elisa; Wolf, Martin [Max-Planck-Institute for Nuclear Physics, Heidelberg (Germany); Schukraft, Anne [RWTH, Aachen University (Germany)

    2010-07-01

    IceCube is a neutrino observatory for astroparticle and astronomy research at the South Pole. It uses one cubic kilometer of Antarctica's deepest ice (1500-2500 m in depth) to detect Cherenkov light, generated by charged particles traveling through the ice, with an array of phototubes encapsulated in glass pressure spheres. The arrival times and deposited charges of the detected photons are the base measurements used for track and energy reconstruction of those charged particles. The optical properties of the deep Antarctic ice vary from layer to layer. Measurements of the ice properties and their correct modeling in Monte Carlo simulation are therefore of primary importance for a correct understanding of the behavior of the IceCube telescope. After a short summary of the different methods used to investigate the ice properties and to calibrate the detector, we show how the simulation obtained using this information compares to the measured data and how systematic errors due to uncertain ice properties are determined in IceCube.

  1. Prevalence and spectrum of residual symptoms in Lyme neuroborreliosis after pharmacological treatment: a systematic review.

    Science.gov (United States)

    Dersch, R; Sommer, H; Rauer, S; Meerpohl, J J

    2016-01-01

    Controversy exists about residual symptoms after pharmacological treatment of Lyme neuroborreliosis (LNB). Reports of disabling long-term sequelae lead to concerns in patients and health care providers. We systematically reviewed the available evidence from studies reporting treatment of LNB to assess the prevalence and spectrum of residual symptoms after treatment. A literature search was performed in three databases and three clinical trial registers to find eligible studies reporting on residual symptoms in patients after pharmacological treatment of LNB. Diagnosis must have been performed according to consensus-derived case definitions. No restrictions regarding study design or language were set. Symptom prevalence was pooled using a random-effects model. Forty-four eligible clinical trials and studies were found: 8 RCTs, 17 cohort studies, 2 case-control studies, and 17 case series. The follow-up period in the eligible studies ranged from 7 days to 20 years. The weighted mean proportion of residual symptoms was 28% (95% CI 23-34%, n = 34 studies) at the latest reported time point. Prevalence of residual symptoms was statistically significantly higher in studies using the "possible" case definition (p = 0.0048). Cranial neuropathy, pain, paresis, cognitive disturbances, headache, and fatigue were statistically significantly lower in studies using the "probable/definite" case definition. LNB patients may experience residual symptoms after treatment with a prevalence of approximately 28%. The prevalence and spectrum of residual symptoms differ according to the applied case definition. Symptoms like fatigue are not reported in studies using the "probable/definite" case definition. As the "possible" case definition is more unspecific, patients with other conditions may be included. Reports of debilitating fatigue and cognitive impairment after LNB, a "post-Lyme syndrome", could therefore be an artifact of unspecific case definitions in single
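
    The pooled 28% figure comes from random-effects pooling of study proportions. The review does not spell out its estimator, so the sketch below shows one common choice, DerSimonian-Laird pooling on the logit scale, with invented study counts purely to illustrate the mechanics.

    ```python
    import numpy as np

    def pooled_prevalence_dl(events, totals):
        """DerSimonian-Laird random-effects pooling of proportions on the
        logit scale; returns the pooled prevalence with a 95% CI."""
        events = np.asarray(events, float)
        totals = np.asarray(totals, float)
        p = events / totals
        y = np.log(p / (1 - p))                 # logit-transformed prevalences
        v = 1 / events + 1 / (totals - events)  # approximate within-study variances
        w = 1 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)      # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c) # between-study variance
        w_star = 1 / (v + tau2)
        y_re = np.sum(w_star * y) / np.sum(w_star)
        se = np.sqrt(1 / np.sum(w_star))
        expit = lambda x: 1 / (1 + np.exp(-x))
        return expit(y_re), expit(y_re - 1.96 * se), expit(y_re + 1.96 * se)

    # Hypothetical counts: residual-symptom cases / treated patients per study
    print(pooled_prevalence_dl([12, 30, 8, 25], [50, 90, 40, 80]))
    ```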

  2. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion on this theme. Keywords: references, periodicals, research, bibliometrics.

  3. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  4. ILRS Activities in Monitoring Systematic Errors in SLR Data

    Science.gov (United States)

    Pavlis, E. C.; Luceri, V.; Kuzmicz-Cieslak, M.; Bianco, G.

    2017-12-01

    The International Laser Ranging Service (ILRS) contributes unique information to ITRF development that only Satellite Laser Ranging (SLR) is sensitive to: the definition of the origin and, in equal parts with VLBI, the scale of the model. For the development of ITRF2014, the ILRS analysts adopted a revision of the internal standards and procedures for generating our contribution from the eight ILRS Analysis Centers. The improved results for the ILRS components were reflected in the resulting new time series of the ITRF origin and scale, showing insignificant trends and tighter scatter. This effort was further extended after the release of ITRF2014, with the execution of a Pilot Project (PP) in the 2016-2017 timeframe that demonstrated the robust estimation of persistent systematic errors at the millimeter level. The ILRS ASC is now turning this into an operational tool to monitor station performance and to generate a history of systematics at each station, to be used with each re-analysis for future ITRF model developments. This is part of a broader ILRS effort to improve the quality control of the data collection process as well as that of our products. To this end, the ILRS has established a "Quality Control Board" (QCB) that comprises members from the analysis and engineering groups, the Central Bureau, and even user groups with special interests. The QCB meets by telecon monthly, oversees the various ongoing projects, and develops ideas for new tools and future products. This presentation will focus on this main topic, with an update on the results so far, the schedule for the near future and the operational implementation, along with a brief description of upcoming new ILRS products.

  5. Managing Systematic Errors in a Polarimeter for the Storage Ring EDM Experiment

    Science.gov (United States)

    Stephenson, Edward J.; Storage Ring EDM Collaboration

    2011-05-01

    The EDDA plastic scintillator detector system at the Cooler Synchrotron (COSY) has been used to demonstrate that it is possible, using a thick target at the edge of the circulating beam, to meet the requirements for a polarimeter to be used in the search for an electric dipole moment of the proton or deuteron. Emphasizing elastic and low Q-value reactions leads to large analyzing powers and, along with thick targets, to efficiencies near 1%. Using only information obtained by comparing count rates for oppositely vector-polarized beam states, together with a calibration of the sensitivity of the polarimeter to rate and geometric changes, the contribution of systematic errors can be suppressed below the level of one part per million.

  6. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    Science.gov (United States)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.

  7. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    Science.gov (United States)

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
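
    The mtanh fit at the core of this procedure can be reproduced in a few lines. The sketch below fits a common pedestal parameterisation to a synthetic ELM-synchronised profile; the parameter values and noise model are invented, and the instrument-function deconvolution applied to real HRTS data is deliberately omitted.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def mtanh(x, core_slope):
        """Modified hyperbolic tangent with a linear term on the core side."""
        return ((1 + core_slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

    def pedestal(R, height, position, width, offset, core_slope):
        """mtanh pedestal profile: height/position/width describe the steep edge."""
        x = (position - R) / (width / 2)
        return (height - offset) / 2 * (mtanh(x, core_slope) + 1) + offset

    # Synthetic profile with HRTS-like point-to-point scatter (values invented)
    R = np.linspace(3.6, 3.9, 120)                     # major radius (m)
    truth = pedestal(R, 1.0, 3.82, 0.03, 0.05, 0.1)    # "true" pedestal
    rng = np.random.default_rng(0)
    data = truth + rng.normal(0.0, 0.03, R.size)

    popt, _ = curve_fit(pedestal, R, data, p0=[1.0, 3.8, 0.05, 0.0, 0.0])
    print(f"fitted pedestal width: {popt[2]:.4f} m")
    ```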

  8. Residual stress measurement in a metal microdevice by micro Raman spectroscopy

    International Nuclear Information System (INIS)

    Song, Chang; Du, Liqun; Qi, Leijie; Li, Yu; Li, Xiaojun; Li, Yuanqi

    2017-01-01

    Large residual stress induced during the electroforming process cannot be ignored if reliable metal microdevices are to be fabricated. Accurate measurement is the basis for studying the residual stress. Influenced by the micron-scale topological feature size of the metal microdevice, the residual stress in it can hardly be measured by common methods. In this manuscript, a methodology is proposed to measure the residual stress in the metal microdevice using micro Raman spectroscopy (MRS). To estimate the residual stress in metal materials, micron-sized β -SiC particles were mixed in the electroforming solution for codeposition. First, the calculated expression relating the Raman shifts to the induced biaxial stress for β -SiC was derived based on the theory of phonon deformation potentials and Hooke’s law. Corresponding micro electroforming experiments were performed and the residual stress in the Ni–SiC composite layer was measured by both the x-ray diffraction (XRD) and MRS methods. Then, the validity of the MRS measurements was verified by comparison with the residual stress measured by the XRD method. The reliability of the MRS method was further validated by the statistical Student’s t -test. The MRS measurements were found to have no systematic error in comparison with the XRD measurements, which confirms that the residual stresses measured by the MRS method are reliable. Beyond that, the MRS method, by which the residual stress in a micro inertial switch was measured, has been confirmed to be a convincing experimental tool for estimating the residual stress in metal microdevices with micron-order topological feature sizes. (paper)

  9. Residual stress measurement in a metal microdevice by micro Raman spectroscopy

    Science.gov (United States)

    Song, Chang; Du, Liqun; Qi, Leijie; Li, Yu; Li, Xiaojun; Li, Yuanqi

    2017-10-01

    Large residual stress induced during the electroforming process cannot be ignored if reliable metal microdevices are to be fabricated. Accurate measurement is the basis for studying the residual stress. Influenced by the micron-scale topological feature size of the metal microdevice, the residual stress in it can hardly be measured by common methods. In this manuscript, a methodology is proposed to measure the residual stress in the metal microdevice using micro Raman spectroscopy (MRS). To estimate the residual stress in metal materials, micron-sized β-SiC particles were mixed in the electroforming solution for codeposition. First, the calculated expression relating the Raman shifts to the induced biaxial stress for β-SiC was derived based on the theory of phonon deformation potentials and Hooke’s law. Corresponding micro electroforming experiments were performed and the residual stress in the Ni-SiC composite layer was measured by both the x-ray diffraction (XRD) and MRS methods. Then, the validity of the MRS measurements was verified by comparison with the residual stress measured by the XRD method. The reliability of the MRS method was further validated by the statistical Student’s t-test. The MRS measurements were found to have no systematic error in comparison with the XRD measurements, which confirms that the residual stresses measured by the MRS method are reliable. Beyond that, the MRS method, by which the residual stress in a micro inertial switch was measured, has been confirmed to be a convincing experimental tool for estimating the residual stress in metal microdevices with micron-order topological feature sizes.
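
    At its core, the MRS estimate rests on a linear relation between the Raman peak shift of the embedded β-SiC particles and the biaxial stress. A minimal sketch of that conversion follows; the coefficient k and both peak positions are placeholders rather than values from the paper, where the relation is derived from the phonon deformation potentials and Hooke’s law.

    ```python
    # Convert a measured Raman peak shift of beta-SiC into an estimate of
    # biaxial residual stress via an assumed linear shift-stress relation.
    peak_unstressed_cm1 = 796.0   # approx. TO-mode position of stress-free beta-SiC
    peak_measured_cm1 = 797.5     # hypothetical measured position in the deposit
    k_cm1_per_gpa = 3.1           # placeholder shift-per-stress coefficient

    delta_omega = peak_measured_cm1 - peak_unstressed_cm1
    stress_gpa = delta_omega / k_cm1_per_gpa  # positive shift -> compressive stress
    print(f"estimated biaxial residual stress: {stress_gpa:.2f} GPa")
    ```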

  10. Pharyngeal Residue Severity Rating Scales Based on Fiberoptic Endoscopic Evaluation of Swallowing: A Systematic Review.

    Science.gov (United States)

    Neubauer, Paul D; Hersey, Denise P; Leder, Steven B

    2016-06-01

    Identification of the severity of pharyngeal residue located in the valleculae and pyriform sinuses has always been a primary goal during fiberoptic endoscopic evaluation of swallowing (FEES). Pharyngeal residue is a clinical sign of potential prandial aspiration, making an accurate description of its severity an important but difficult challenge. A reliable, validated, and generalizable pharyngeal residue severity rating scale for FEES would be beneficial. A systematic review of the published English-language literature since 1995 was conducted to determine the quality of existing pharyngeal residue severity rating scales based on FEES. Databases were searched using controlled vocabulary words and synonymous free-text words for topics of interest (deglutition disorders, pharyngeal residue, endoscopy, videofluoroscopy, fiberoptic technology, aspiration, etc.) and outcomes of interest (scores, scales, grades, tests, FEES, etc.). Search strategies were adjusted for syntax appropriate to each database/platform. The data sources MEDLINE (OvidSP, 1946-April week 3, 2015), Embase (OvidSP, 1974-April 20, 2015), Scopus (Elsevier), and the unindexed material in PubMed (NLM/NIH) were searched for relevant articles. Supplementary efforts to identify studies included checking the reference lists of retrieved articles. Scales were compared using qualitative properties (sample size, severity definitions, number of raters, and raters' experience and training) and psychometric analyses (randomization, intra- and inter-rater reliability, and construct validity). Seven articles describing pharyngeal residue severity rating scales met the inclusion criteria. Six of the seven scales had insufficient data to support their use, as evidenced by methodological weaknesses in both qualitative properties and psychometric analyses. There is a need for qualitatively and psychometrically reliable, validated, and generalizable pharyngeal residue severity rating scales that are anatomically specific, image

  11. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors

    NARCIS (Netherlands)

    Stroobants, A. K.; Goldschmidt, H. M. J.; Plebani, M.

    2003-01-01

    Background: Random, systematic and sporadic errors, which unfortunately are not uncommon in laboratory medicine, can have a considerable impact on the well being of patients. Although somewhat difficult to attain, our main goal should be to prevent all possible errors. A good insight on error-prone

  12. Human-simulation-based learning to prevent medication error: A systematic review.

    Science.gov (United States)

    Sarfati, Laura; Ranchon, Florence; Vantard, Nicolas; Schwiertz, Vérane; Larbre, Virginie; Parat, Stéphanie; Faudel, Amélie; Rioufol, Catherine

    2018-01-31

    In the past 2 decades, there has been an increasing interest in simulation-based learning programs to prevent medication error (ME). To improve knowledge, skills, and attitudes in prescribers, nurses, and pharmaceutical staff, these methods enable training without directly involving patients. However, best practices for simulation for healthcare providers are as yet undefined. By analysing the current state of experience in the field, the present review aims to assess whether human simulation in healthcare helps to reduce ME. A systematic review was conducted on Medline from 2000 to June 2015, associating the terms "Patient Simulation," "Medication Errors," and "Simulation Healthcare." Reports of technology-based simulation were excluded, to focus exclusively on human simulation in nontechnical skills learning. Twenty-one studies assessing simulation-based learning programs were selected, focusing on pharmacy, medicine or nursing students, or concerning programs aimed at reducing administration or preparation errors, managing crises, or learning communication skills for healthcare professionals. The studies varied in design, methodology, and assessment criteria. Few demonstrated that simulation was more effective than didactic learning in reducing ME. This review highlights a lack of long-term assessment and real-life extrapolation, with limited scenarios and participant samples. These various experiences, however, help in identifying the key elements required for an effective human simulation-based learning program for ME prevention: ie, scenario design, debriefing, and perception assessment. The performance of these programs depends on their ability to reflect reality and on professional guidance. Properly regulated simulation is a good way to train staff in events that happen only exceptionally, as well as in standard daily activities. By integrating human factors, simulation seems to be effective in preventing iatrogenic risk related to ME, if the program is

  13. Simulating systematic errors in X-ray absorption spectroscopy experiments: Sample and beam effects

    Energy Technology Data Exchange (ETDEWEB)

    Curis, Emmanuel [Laboratoire de Biomathematiques, Faculte de Pharmacie, Universite Rene Descartes (Paris V), 4 Avenue de l'Observatoire, 75006 Paris (France)]. E-mail: emmanuel.curis@univ-paris5.fr; Osan, Janos [KFKI Atomic Energy Research Institute (AEKI), P.O. Box 49, H-1525 Budapest (Hungary); Falkenberg, Gerald [Hamburger Synchrotronstrahlungslabor (HASYLAB), Deutsches Elektronen-Synchrotron (DESY), Notkestrasse 85, 22607 Hamburg (Germany); Benazeth, Simone [Laboratoire de Biomathematiques, Faculte de Pharmacie, Universite Rene Descartes (Paris V), 4 Avenue de l'Observatoire, 75006 Paris (France); Laboratoire d'Utilisation du Rayonnement Electromagnetique (LURE), Batiment 209D, Campus d'Orsay, 91406 Orsay (France); Toeroek, Szabina [KFKI Atomic Energy Research Institute (AEKI), P.O. Box 49, H-1525 Budapest (Hungary)

    2005-07-15

    The article presents an analytical model to simulate experimental imperfections in the realization of an X-ray absorption spectroscopy experiment, performed in transmission or fluorescence mode. Distinction is made between sources of systematic errors on a time-scale basis, to select the more appropriate model for their handling. For short time-scale, statistical models are the most suited. For large time-scale, the model is developed for sample and beam imperfections: mainly sample inhomogeneity, sample self-absorption, beam achromaticity. The ability of this model to reproduce the effects of these imperfections is exemplified, and the model is validated on real samples. Various potential application fields of the model are then presented.

  14. Simulating systematic errors in X-ray absorption spectroscopy experiments: Sample and beam effects

    International Nuclear Information System (INIS)

    Curis, Emmanuel; Osan, Janos; Falkenberg, Gerald; Benazeth, Simone; Toeroek, Szabina

    2005-01-01

    The article presents an analytical model to simulate experimental imperfections in the realization of an X-ray absorption spectroscopy experiment, performed in transmission or fluorescence mode. Distinction is made between sources of systematic errors on a time-scale basis, to select the more appropriate model for their handling. For short time-scale, statistical models are the most suited. For large time-scale, the model is developed for sample and beam imperfections: mainly sample inhomogeneity, sample self-absorption, beam achromaticity. The ability of this model to reproduce the effects of these imperfections is exemplified, and the model is validated on real samples. Various potential application fields of the model are then presented
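
    One concrete example of the large-time-scale sample imperfections such a model must handle is thickness inhomogeneity ("pinholes") in transmission mode: if a fraction f of the beam bypasses the sample, the measured absorbance saturates below the true value. The sketch below illustrates this standard effect with invented numbers; it is not the authors' implementation.

    ```python
    import numpy as np

    mu_t_true = np.linspace(0.1, 4.0, 40)  # true absorption * thickness product
    f = 0.05                               # hypothetical pinhole area fraction

    transmitted = (1 - f) * np.exp(-mu_t_true) + f   # leakage floor at f
    mu_t_meas = -np.log(transmitted)                 # apparent absorbance

    for t, m in zip(mu_t_true[::10], mu_t_meas[::10]):
        print(f"true mu*t = {t:4.2f}   measured = {m:4.2f}")
    ```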

  15. Prevalence and reporting of recruitment, randomisation and treatment errors in clinical trials: A systematic review.

    Science.gov (United States)

    Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A

    2018-06-01

    Background/aims In clinical trials, it is not unusual for errors to occur during the process of recruiting, randomising and providing treatment to participants. For example, an ineligible participant may inadvertently be randomised, a participant may be randomised in the incorrect stratum, a participant may be randomised multiple times when only a single randomisation is permitted or the incorrect treatment may inadvertently be issued to a participant at randomisation. Such errors have the potential to introduce bias into treatment effect estimates and affect the validity of the trial, yet there is little motivation for researchers to report these errors and it is unclear how often they occur. The aim of this study is to assess the prevalence of recruitment, randomisation and treatment errors and review current approaches for reporting these errors in trials published in leading medical journals. Methods We conducted a systematic review of individually randomised, phase III, randomised controlled trials published in the New England Journal of Medicine, Lancet, Journal of the American Medical Association, Annals of Internal Medicine and British Medical Journal from January to March 2015. The number and type of recruitment, randomisation and treatment errors that were reported, and how they were handled, were recorded. The corresponding authors were contacted for a random sample of trials included in the review and asked to provide details on unreported errors that occurred during their trial. Results We identified 241 potentially eligible articles, of which 82 met the inclusion criteria and were included in the review. These trials involved a median of 24 centres and 650 participants, and 87% involved two treatment arms. Recruitment, randomisation or treatment errors were reported in 32 of 82 trials (39%), with a median of eight errors per trial. The most commonly reported error was ineligible participants inadvertently being randomised. No mention of recruitment, randomisation

  16. Exploring cosmic origins with CORE: Mitigation of systematic effects

    Science.gov (United States)

    Natoli, P.; Ashdown, M.; Banerji, R.; Borrill, J.; Buzzelli, A.; de Gasperis, G.; Delabrouille, J.; Hivon, E.; Molinari, D.; Patanchon, G.; Polastri, L.; Tomasi, M.; Bouchet, F. R.; Henrot-Versillé, S.; Hoang, D. T.; Keskitalo, R.; Kiiveri, K.; Kisner, T.; Lindholm, V.; McCarthy, D.; Piacentini, F.; Perdereau, O.; Polenta, G.; Tristram, M.; Achucarro, A.; Ade, P.; Allison, R.; Baccigalupi, C.; Ballardini, M.; Banday, A. J.; Bartlett, J.; Bartolo, N.; Basak, S.; Baumann, D.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Boulanger, F.; Brinckmann, T.; Bucher, M.; Burigana, C.; Cai, Z.-Y.; Calvo, M.; Carvalho, C.-S.; Castellano, M. G.; Challinor, A.; Chluba, J.; Clesse, S.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; de Bernardis, P.; De Zotti, G.; Di Valentino, E.; Diego, J.-M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Finelli, F.; Forastieri, F.; Galli, S.; Genova-Santos, R.; Gerbino, M.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Gruppuso, A.; Hagstotz, S.; Hanany, S.; Handley, W.; Hernandez-Monteagudo, C.; Hervías-Caimapo, C.; Hills, M.; Keihänen, E.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lattanzi, M.; Lesgourgues, J.; Lewis, A.; Liguori, M.; López-Caniego, M.; Luzzi, G.; Maffei, B.; Mandolesi, N.; Martinez-González, E.; Martins, C. J. A. P.; Masi, S.; Matarrese, S.; Melchiorri, A.; Melin, J.-B.; Migliaccio, M.; Monfardini, A.; Negrello, M.; Notari, A.; Pagano, L.; Paiella, A.; Paoletti, D.; Piat, M.; Pisano, G.; Pollo, A.; Poulin, V.; Quartin, M.; Remazeilles, M.; Roman, M.; Rossi, G.; Rubino-Martin, J.-A.; Salvati, L.; Signorelli, G.; Tartari, A.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Valiviita, J.; Van de Weijgaert, R.; van Tent, B.; Vennin, V.; Vielva, P.; Vittorio, N.; Wallis, C.; Young, K.; Zannoni, M.

    2018-04-01

    We present an analysis of the main systematic effects that could impact the measurement of CMB polarization with the proposed CORE space mission. We employ timeline-to-map simulations to verify that the CORE instrumental set-up and scanning strategy allow us to measure sky polarization to a level of accuracy adequate to the mission science goals. We also show how the CORE observations can be processed to mitigate the level of contamination by potentially worrying systematics, including intensity-to-polarization leakage due to bandpass mismatch, asymmetric main beams, pointing errors and correlated noise. We use analysis techniques that are well validated on data from current missions such as Planck to demonstrate how the residual contamination of the measurements by these effects can be brought to a level low enough not to hamper the scientific capability of the mission, nor significantly increase the overall error budget. We also present a prototype of the CORE photometric calibration pipeline, based on that used for Planck, and discuss its robustness to systematics, showing how CORE can achieve its calibration requirements. While a fine-grained assessment of the impact of systematics requires a level of knowledge of the system that can only be achieved in a future study phase, the analysis presented here strongly suggests that the main areas of concern for the CORE mission can be addressed using existing knowledge, techniques and algorithms.

  17. Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with the cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H0, as determined by the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

  18. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevin.coakley@nist.gov [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S. [National Institute of Standards and Technology, Gaithersburg, MD (United States); Yue, A.T. [University of Tennessee, Knoxville, TN (United States); Laptev, A.B. [Tulane University, New Orleans, LA (United States)

    2009-12-11

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.
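
    A stripped-down version of the kind of Monte Carlo study described here: propagate an uncertain tilt between the rocking axis and the scattering-plane normal through the Bragg condition lambda = 2 d sin(theta) and read off the induced wavelength bias. The tilt distribution, Bragg angle and the small-tilt cos(eps) correction below are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    d = 3.1356e-10                       # Si(111) plane spacing (m)
    theta = np.deg2rad(26.0)             # hypothetical measured Bragg angle

    # Assumed uncertainty of the tilt between rocking axis and plane normal
    tilt = rng.normal(0.0, np.deg2rad(0.05), 100_000)

    lam_nominal = 2 * d * np.sin(theta)
    # Small-tilt approximation: an out-of-plane tilt eps scales the effective
    # wavelength by cos(eps), a second-order (always one-sided) bias
    lam_biased = lam_nominal * np.cos(tilt)

    rel_bias = np.mean(lam_biased - lam_nominal) / lam_nominal
    print(f"relative systematic wavelength bias: {rel_bias:.2e}")
    ```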

  19. Image pre-filtering for measurement error reduction in digital image correlation

    Science.gov (United States)

    Zhou, Yihao; Sun, Chen; Song, Yuntao; Chen, Jubing

    2015-02-01

    In digital image correlation, the sub-pixel intensity interpolation causes a systematic error in the measured displacements. The error increases toward high-frequency component of the speckle pattern. In practice, a captured image is usually corrupted by additive white noise. The noise introduces additional energy in the high frequencies and therefore raises the systematic error. Meanwhile, the noise also elevates the random error which increases with the noise power. In order to reduce the systematic error and the random error of the measurements, we apply a pre-filtering to the images prior to the correlation so that the high-frequency contents are suppressed. Two spatial-domain filters (binomial and Gaussian) and two frequency-domain filters (Butterworth and Wiener) are tested on speckle images undergoing both simulated and real-world translations. By evaluating the errors of the various combinations of speckle patterns, interpolators, noise levels, and filter configurations, we come to the following conclusions. All the four filters are able to reduce the systematic error. Meanwhile, the random error can also be reduced if the signal power is mainly distributed around DC. For high-frequency speckle patterns, the low-pass filters (binomial, Gaussian and Butterworth) slightly increase the random error and Butterworth filter produces the lowest random error among them. By using Wiener filter with over-estimated noise power, the random error can be reduced but the resultant systematic error is higher than that of low-pass filters. In general, Butterworth filter is recommended for error reduction due to its flexibility of passband selection and maximal preservation of the allowed frequencies. Binomial filter enables efficient implementation and thus becomes a good option if computational cost is a critical issue. While used together with pre-filtering, B-spline interpolator produces lower systematic error than bicubic interpolator and similar level of the random
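
    As a minimal sketch of the pre-filtering step (filter choice and width are illustrative; the correlation and sub-pixel interpolation stages are omitted), a Gaussian low-pass applied before correlation suppresses the high-frequency content where both the interpolation-induced systematic error and the noise-driven random error concentrate.

    ```python
    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    speckle = ndimage.gaussian_filter(rng.random((256, 256)), 1.0)  # toy speckle pattern
    noisy = speckle + rng.normal(0.0, 0.02, speckle.shape)          # additive white noise

    # Low-pass pre-filter; the DIC correlation would then run on `prefiltered`
    prefiltered = ndimage.gaussian_filter(noisy, sigma=0.8)

    print(f"std before: {noisy.std():.4f}, after pre-filtering: {prefiltered.std():.4f}")
    ```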

  20. Systematic errors in the readings of track etch neutron dosemeters caused by the energy dependence of response

    International Nuclear Information System (INIS)

    Tanner, R.J.; Thomas, D.J.; Bartlett, D.T.; Horwood, N.

    1999-01-01

    A study has been performed to assess the extent to which variations in the energy dependence of response of neutron personal dosemeters can cause systematic errors in readings obtained in workplace fields. This involved a detailed determination of the response functions of personal dosemeters used in the UK. These response functions were folded with workplace spectra to ascertain the under- or over-response in workplace fields

  1. Systematic errors in the readings of track etch neutron dosemeters caused by the energy dependence of response

    CERN Document Server

    Tanner, R J; Bartlett, D T; Horwood, N

    1999-01-01

    A study has been performed to assess the extent to which variations in the energy dependence of response of neutron personal dosemeters can cause systematic errors in readings obtained in workplace fields. This involved a detailed determination of the response functions of personal dosemeters used in the UK. These response functions were folded with workplace spectra to ascertain the under- or over-response in workplace fields.
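
    The folding operation referred to in both records is a fluence-weighted average of the energy-dependent response over a workplace spectrum. The sketch below shows the arithmetic with an invented spectrum, fluence-to-dose conversion and response function; a real evaluation would use tabulated workplace spectra and measured response matrices.

    ```python
    import numpy as np

    E = np.logspace(-8, 1, 200)                          # neutron energy grid (MeV)
    phi = np.exp(-0.5 * (np.log(E / 1e-2) / 2.0) ** 2)   # toy workplace spectrum
    h = 1.0 + 0.5 * np.log10(E / E.min()) / 9.0          # toy fluence-to-dose factors
    resp = np.where(E < 0.1, 0.6, 1.3)                   # toy response per unit dose

    # Reading relative to reference dose equivalent: >1 means over-response
    ratio = np.trapz(resp * h * phi, E) / np.trapz(h * phi, E)
    print(f"reading / reference dose equivalent = {ratio:.2f}")
    ```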

  2. Evaluating and improving the representation of heteroscedastic errors in hydrological models

    Science.gov (United States)

    McInerney, D. J.; Thyer, M. A.; Kavetski, D.; Kuczera, G. A.

    2013-12-01

    Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic predictions. In particular, residual errors of hydrological models are often heteroscedastic, with large errors associated with high rainfall and runoff events. Recent studies have shown that using a weighted least squares (WLS) approach - where the magnitude of residuals are assumed to be linearly proportional to the magnitude of the flow - captures some of this heteroscedasticity. In this study we explore a range of Bayesian approaches for improving the representation of heteroscedasticity in residual errors. We compare several improved formulations of the WLS approach, the well-known Box-Cox transformation and the more recent log-sinh transformation. Our results confirm that these approaches are able to stabilize the residual error variance, and that it is possible to improve the representation of heteroscedasticity compared with the linear WLS approach. We also find generally good performance of the Box-Cox and log-sinh transformations, although as indicated in earlier publications, the Box-Cox transform sometimes produces unrealistically large prediction limits. Our work explores the trade-offs between these different uncertainty characterization approaches, investigates how their performance varies across diverse catchments and models, and recommends practical approaches suitable for large-scale applications.
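
    The linear WLS error model referred to here assumes that the residual standard deviation grows linearly with simulated flow; dividing each residual by that linear function should leave approximately homoscedastic standardised residuals. A toy demonstration (all series and parameters invented) follows.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    q_sim = rng.gamma(2.0, 5.0, 1000)            # toy simulated streamflows
    a, b = 0.5, 0.2                              # hypothetical WLS parameters
    resid = rng.normal(0.0, a + b * q_sim)       # heteroscedastic residual errors

    standardised = resid / (a + b * q_sim)       # WLS-style standardisation

    lo, hi = q_sim < 5, q_sim > 15
    print("raw residual sd (low/high flow):", resid[lo].std(), resid[hi].std())
    print("standardised sd (low/high flow):", standardised[lo].std(), standardised[hi].std())
    ```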

  3. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  4. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
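
    Under the balanced one-factor random effect model assumed in the note, the classical ANOVA estimators recover the systematic (between-patient) and random (within-patient) components from the between- and within-patient mean squares. A simulation sketch with invented parameter values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 20, 10                        # patients, fractions per patient
    Sigma, sigma = 2.0, 3.0              # true systematic / random SD (mm)
    x = rng.normal(0, Sigma, (m, 1)) + rng.normal(0, sigma, (m, n))

    grand = x.mean()
    msb = n * np.sum((x.mean(axis=1) - grand) ** 2) / (m - 1)               # between patients
    msw = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (m * (n - 1))  # within patients

    print("systematic SD estimate:", np.sqrt(max(0.0, (msb - msw) / n)))
    print("random SD estimate:", np.sqrt(msw))
    ```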

  5. Noncontact thermometry via laser pumped, thermographic phosphors: Characterization of systematic errors and industrial applications

    International Nuclear Information System (INIS)

    Gillies, G.T.; Dowell, L.J.; Lutz, W.N.; Allison, S.W.; Cates, M.R.; Noel, B.W.; Franks, L.A.; Borella, H.M.

    1987-10-01

    There are a growing number of industrial measurement situations that call for a high-precision, noncontact method of thermometry. Our collaboration has been successful in developing one such method based on the laser-induced fluorescence of rare-earth-doped ceramic phosphors like Y2O3:Eu. In this paper, we summarize the results of characterization studies aimed at identifying the sources of systematic error in a laboratory-grade version of the method. We then go on to present data from measurements made in the afterburner plume of a jet turbine and inside an operating permanent magnet motor. 12 refs., 6 figs

  6. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well known and widespread Latin proverb which states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyse errors in the process of second language acquisition and the ways in which we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  7. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps on improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014), and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was the interception of errors such as wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps included low compliance rates in using smart pumps, the overriding of soft alerts, non-intercepted errors, and the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug libraries.

  8. Barriers to reporting medication errors and near misses among nurses: A systematic review.

    Science.gov (United States)

    Vrbnjak, Dominika; Denieffe, Suzanne; O'Gorman, Claire; Pajnkihar, Majda

    2016-11-01

    To explore barriers to nurses' reporting of medication errors and near misses in hospital settings. Systematic review. Medline, CINAHL, PubMed and the Cochrane Library, in addition to Google and Google Scholar and the reference lists of relevant studies published in English between January 1981 and April 2015, were searched for relevant qualitative, quantitative or mixed methods empirical studies or unpublished PhD theses. Papers with a primary focus on barriers to reporting medication errors and near misses in nursing were included. The titles and abstracts of the search results were assessed for eligibility and relevance by one of the authors. After retrieval of the full texts, two of the authors independently made decisions concerning final inclusion, and these were validated by the third reviewer. Three authors independently assessed the methodological quality of the studies. Relevant data were extracted and findings were synthesised using thematic synthesis. From 4038 identified records, 38 studies were included in the synthesis. Findings suggest that organizational barriers such as culture, the reporting system and management behaviour, in addition to personal and professional barriers such as fear, accountability and characteristics of nurses, are barriers to reporting medication errors. To overcome the reported barriers it is necessary to develop a non-blaming, non-punitive and non-fearful learning culture at unit and organizational level. Anonymous, effective, uncomplicated and efficient reporting systems and supportive management behaviour that provides open feedback to nurses are needed. Nurses are accountable for patients' safety, so they need to be educated and skilled in error management. The lack of research into barriers to reporting of near misses and low awareness of reporting suggest the need for further research and the development of educational and management approaches to overcome these barriers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Systematic Error of Acoustic Particle Image Velocimetry and Its Correction

    Directory of Open Access Journals (Sweden)

    Mickiewicz Witold

    2014-08-01

    Full Text Available Particle Image Velocimetry is increasingly the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. Particle Image Velocimetry with a low sampling rate (about 15 Hz) can be applied to visualize the acoustic field using acquisition synchronized to the excitation signal. Such a phase-locked PIV technique is described and used in the experiments presented in the paper. The main goal of the research was to propose a model of the PIV systematic error due to the non-zero time interval between the acquisitions of the two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. The usefulness of the presented model is confirmed experimentally. The correction procedure, based on the proposed model, applied to measurement data increases the accuracy of acoustic particle velocity field visualization and creates new possibilities in the observation of sound fields excited with multi-tonal or band-limited noise signals.
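
    The abstract does not spell out the error model, so the sketch below rests on one plausible first-order assumption (mine, not necessarily the authors'): if the PIV velocity estimate is the mean displacement over the inter-frame interval dt, a sinusoidal velocity component at frequency f is attenuated by sinc(f*dt) = sin(pi*f*dt)/(pi*f*dt), and the measured amplitude can be corrected by the inverse factor:

        import numpy as np

        def piv_attenuation(f_hz, dt_s):
            # Amplitude factor for a sinusoid at frequency f when velocity is
            # taken as mean displacement over the inter-frame interval dt
            return np.sinc(f_hz * dt_s)    # np.sinc(u) = sin(pi*u) / (pi*u)

        dt = 100e-6                        # inter-frame interval, s (illustrative)
        for f in (500.0, 2000.0, 5000.0):  # excitation frequencies, Hz (illustrative)
            g = piv_attenuation(f, dt)
            print(f"f = {f:6.0f} Hz: measured/true amplitude = {g:.4f}, "
                  f"correction factor = {1.0 / g:.4f}")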

  10. Effect of repeated transsphenoidal surgery in recurrent or residual pituitary adenomas: A systematic review and meta-analysis

    Science.gov (United States)

    Heringer, Lindolfo Carlos; de Oliveira, Matheus Fernandes; Rotta, José Marcus; Botelho, Ricardo Vieira

    2016-01-01

    Background: Recurrent or residual pituitary adenomas previously treated by transsphenoidal surgery are not uncommon. There are no strongly established guidelines for the treatment of such cases. The objective of this study is to elucidate the effect of transsphenoidal reoperation on residual or recurrent pituitary adenomas. Methods: We performed a systematic review of the literature through an electronic search of the MEDLINE/PubMed and Cochrane Central databases. The PRISMA statement was used as the basis for this systematic review, and the risk of bias was assessed according to the Grading of Recommendations, Assessment, Development and Evaluation recommendations. Results: Fifteen studies were included in the pooled analysis. Although remission rates (RRs) and follow-up periods varied widely, the mean RR was 44.5% among 149 patients with growth hormone-secreting tumors, 55.5% among 273 patients with adrenocorticotropic hormone-secreting tumors, and 76.1% among 173 patients with nonsecreting tumors. The RR was significantly higher for nonsecreting tumors. Mean follow-up was 32.1 months. No difference was found between microscopic and endoscopic techniques. Conclusions: A second transsphenoidal surgery is accompanied by a chance of remission in approximately half of cases with secreting tumors. In nonsecreting ones, success is higher. PMID:26958420

  11. Constituent quarks and systematic errors in mid-rapidity charged multiplicity dNch/dη distributions

    Science.gov (United States)

    Tannenbaum, M. J.

    2018-01-01

    Centrality definition in A + A collisions at colliders such as RHIC and LHC suffers from a correlated systematic uncertainty caused by the efficiency of detecting a p + p collision (50 ± 5% for PHENIX at RHIC). In A + A collisions where centrality is measured by the number of nucleon collisions, Ncoll, the number of nucleon participants, Npart, or the number of constituent quark participants, Nqp, the error in the efficiency of the primary interaction trigger (Beam-Beam Counters) for a p + p collision leads to a correlated systematic uncertainty in Npart, Ncoll or Nqp which reduces binomially as the A + A collisions become more central. If this is not correctly accounted for in projections of A + A to p + p collisions, then mistaken conclusions can result. A recent example is presented, concerning whether the mid-rapidity charged multiplicity per constituent quark participant (dNch/dη)/Nqp in Au + Au at RHIC was the same as the value in p + p collisions.

  12. Errors in Computing the Normalized Protein Catabolic Rate due to Use of Single-pool Urea Kinetic Modeling or to Omission of the Residual Kidney Urea Clearance.

    Science.gov (United States)

    Daugirdas, John T

    2017-07-01

    The protein catabolic rate normalized to body size (PCRn) often is computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of errors were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but the data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each mL/min of Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked errors in PCRn can result from inappropriate use of a single-pool urea kinetic model, particularly when Kt/V < 1.0 (as in short daily dialysis), or from omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
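
    The quoted effect of omitting Kru invites a quick rule-of-thumb check; the sketch below simply applies the reported ~5.6% per (mL/min of Kru per 35 L of V) relation and is in no way the full variable-volume two-pool model:

        def pcrn_underestimate_pct(kru_ml_min, v_liters):
            # Reported rule of thumb: each 1 mL/min of Kru per 35 L of urea
            # distribution volume V gives ~5.6% underestimation of PCRn
            return 5.6 * kru_ml_min * 35.0 / v_liters

        for kru, v in [(1.0, 35.0), (3.0, 35.0), (2.0, 28.0)]:
            print(f"Kru = {kru} mL/min, V = {v} L -> "
                  f"PCRn underestimated by ~{pcrn_underestimate_pct(kru, v):.1f}%")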

  13. Error floor behavior study of LDPC codes for concatenated codes design

    Science.gov (United States)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small when the quantized sum-product (SP) algorithm is used. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-code-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.

  14. Analysis of residual toluene in food packaging via headspace extraction method using gas chromatography

    International Nuclear Information System (INIS)

    Lim, Ying Chin; Mohd Marsin Sanagi

    2008-01-01

    Polymeric materials are used in many food contact applications as packaging materials. Residual toluene present in such food packaging material can migrate into food and thus affect the quality of the food. In this study, a manual headspace analysis was successfully designed and developed. The determination of residual toluene was carried out with the standard addition method and the multiple headspace extraction (MHE) method using gas chromatography with flame ionization detection (GC-FID). Identification of toluene was performed by comparison of its retention time with that of standard toluene and by GC-MS. It was found that the suitable heating temperature was 180 degree Celsius with an optimum heating time of 10 minutes. The study also found that the concentration of residual toluene in multicolored samples was higher than in monocolored samples, and that residual toluene determined by the standard addition method was higher than that determined by the MHE method. However, comparison with the results obtained by the De Paris laboratory, France, found that the MHE method gave higher accuracy for samples with low analyte concentration. On the other hand, lower accuracy was obtained for samples with a high concentration of residual toluene, due to systematic errors. Comparison between the determination methods showed that the MHE method is more precise than the standard addition method. (author)

  15. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. To avoid human errors it is necessary to adapt systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but they are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  16. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, T; Panini, G C [ENEA - Dipartimento Tecnologie Intersettoriali di Base, Centro Ricerche Energia, Casaccia (Italy); Amoroso, A [Guest Researcher (Italy)]

    1989-11-15

    Information about systematic errors is not given in EXFOR, the database of nuclear experimental measurements: its assessment is left to the ability of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, provides valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of automatic evaluation of systematic errors in experiments. The features of the project which led to the implementation of the system are presented. (author)

  17. Cover crop residue management for optimizing weed control

    NARCIS (Netherlands)

    Kruidhof, H.M.; Bastiaans, L.; Kropff, M.J.

    2009-01-01

    Although residue management seems a key factor in residue-mediated weed suppression, very few studies have systematically compared the influence of different residue management strategies on the establishment of crop and weed species. We evaluated the effect of several methods of pre-treatment and

  18. Systematic instrumental errors between oxygen saturation analysers in fetal blood during deep hypoxemia.

    Science.gov (United States)

    Porath, M; Sinha, P; Dudenhausen, J W; Luttkus, A K

    2001-05-01

    During a study of artificially produced deep hypoxemia in fetal cord blood, systematic errors of three different oxygen saturation analysers were evaluated against a reference CO oximeter. The oxygen tensions (PO2) of 83 pre-heparinized fetal blood samples from umbilical veins were reduced by tonometry to 1.3 kPa (10 mm Hg) and 2.7 kPa (20 mm Hg). The oxygen saturation (SO2) was determined (n=1328) on a reference CO oximeter (ABL625, Radiometer Copenhagen) and on three tested instruments (two CO oximeters: Chiron865, Bayer Diagnostics; ABL700, Radiometer Copenhagen, and a portable blood gas analyser, i-STAT, Abbott). The CO oximeters measure the oxyhemoglobin and reduced hemoglobin fractions by absorption spectrophotometry. The i-STAT system calculates the oxygen saturation from the measured pH, PO2, and PCO2. The measurements were performed in duplicate. Statistical evaluation focused on the differences between duplicate measurements and on systematic instrumental errors in oxygen saturation analysis compared to the reference CO oximeter. After tonometry, the median saturation dropped to 32.9% at a PO2 of 2.7 kPa (20 mm Hg), defined as saturation range 1, and to 10% SO2 at a PO2 of 1.3 kPa (10 mm Hg), defined as range 2. With decreasing SO2, all devices showed an increased difference between duplicate measurements. ABL625 and ABL700 showed the closest agreement between instruments (0.25% SO2 bias at saturation range 1 and -0.33% SO2 bias at saturation range 2). Chiron865 indicated higher saturation values than ABL625 (3.07% SO2 bias at saturation range 1 and 2.28% SO2 bias at saturation range 2). Calculated saturation values (i-STAT) were more than 30% lower than the values measured by ABL625. The disagreement among CO oximeters was small but increased under deep hypoxemia; the calculated saturation values were unacceptably low.

  19. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  20. Accounting for measurement error: a critical but often overlooked process.

    Science.gov (United States)

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
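
    The abstract stops short of formulas, but a widely used estimator of TEM from duplicate measurement sessions is Dahlberg's statistic, TEM = sqrt(sum(d_i^2) / 2n); a minimal sketch with invented duplicate measurements:

        import numpy as np

        def tem_dahlberg(session1, session2):
            # Dahlberg's technical error of measurement for duplicate sessions,
            # where d_i is the difference between the two repeats of specimen i
            d = np.asarray(session1, float) - np.asarray(session2, float)
            return np.sqrt(np.sum(d ** 2) / (2 * d.size))

        # Illustrative duplicate measurements of the same specimens (mm)
        s1 = np.array([10.1, 12.4, 11.8, 9.9, 13.0])
        s2 = np.array([10.3, 12.1, 11.9, 10.2, 12.8])
        print(f"TEM = {tem_dahlberg(s1, s2):.3f} mm")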

  1. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave "significantly" better precision. Our analysis was based on propagation of error models that contained all known sources of error, including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs

  2. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error, classification as systematic and random errors. Statistical fundamentals: probability theory, population distributions, Bernoulli, Poisson, Gauss, the t-test distribution, the χ² test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test.

  3. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance
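
    One compact reading of this construction for a weighted least squares estimator (a sketch of the general principle - scaling the theoretical covariance by the average weighted residual variance - rather than the paper's exact formulation) is:

        import numpy as np

        def wls_with_empirical_covariance(A, b, W):
            # Weighted least squares: x = (A^T W A)^-1 A^T W b
            N = A.T @ W @ A
            x = np.linalg.solve(N, A.T @ W @ b)
            r = b - A @ x                          # actual measurement residuals
            m, n = A.shape
            s2 = (r @ W @ r) / (m - n)             # average weighted residual variance
            P_theory = np.linalg.inv(N)            # maps assumed obs errors only
            return x, P_theory, s2 * P_theory      # empirical: scaled by residuals

        rng = np.random.default_rng(3)
        A = rng.normal(size=(100, 3))
        x_true = np.array([1.0, -2.0, 0.5])
        b = A @ x_true + rng.normal(0, 3.0, size=100)  # noise 3x the assumed unit sigma
        x, P_th, P_emp = wls_with_empirical_covariance(A, b, np.eye(100))
        print("theoretical sigmas:", np.sqrt(np.diag(P_th)))
        print("empirical sigmas:  ", np.sqrt(np.diag(P_emp)))

    When the assumed observation errors are too optimistic, the theoretical sigmas stay small while the empirical ones grow to reflect the residuals actually observed.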

  4. Modeling the North American vertical datum of 1988 errors in the conterminous United States

    Science.gov (United States)

    Li, X.

    2018-02-01

    A large systematic difference (ranging from -20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and the pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions such as polynomials, B-splines, and Legendre functions and Latent Variable Analysis (LVA) such as Factor Analysis (FA) are used to analyze the systematic difference. Besides giving a mathematical model, the regression results do not reveal a great deal about the physical reasons that caused the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, there is still a significant amount of non-Gaussian signal left in the residuals of the conventional regression models. On the other hand, the FA method not only provides a better fit of the data, but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolating method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.

  5. Modeling the North American vertical datum of 1988 errors in the conterminous United States

    Directory of Open Access Journals (Sweden)

    Li X.

    2018-02-01

    Full Text Available A large systematic difference (ranging from −20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and the pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions such as polynomials, B-splines, and Legendre functions and Latent Variable Analysis (LVA) such as Factor Analysis (FA) are used to analyze the systematic difference. Besides giving a mathematical model, the regression results do not reveal a great deal about the physical reasons that caused the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, there is still a significant amount of non-Gaussian signal left in the residuals of the conventional regression models. On the other hand, the FA method not only provides a better fit of the data, but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolating method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.

  6. The effect of errors in charged particle beams

    International Nuclear Information System (INIS)

    Carey, D.C.

    1987-01-01

    Residual errors in a charged particle optical system determine how well the performance of the system conforms to the theory on which it is based. Mathematically possible optical modes can sometimes be eliminated because they require precisions that are not attainable. Other plans may require the introduction of means of correcting for the occurrence of various errors. Error types include misalignments, magnet fabrication precision limitations, and magnet current regulation errors. A thorough analysis of a beam optical system requires computer simulation of all these effects. A unified scheme for the simulation of errors and their correction is discussed

  7. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  8. Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    Science.gov (United States)

    Imig, Astrid; Stephenson, Edward

    2009-10-01

    The Storage Ring EDM Collaboration used the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17 mm thick carbon placed close to the beam, so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.

  9. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    Science.gov (United States)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  10. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen; Williamson, Jeffrey F.; Schmidt-Ullrich, Rupert K.

    2005-01-01

    Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for σ = Σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) D98 (dose received by 98% of the volume), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% having a 5% dose error.
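
    The random-error simulation described above - convolving each beam's two-dimensional fluence with the setup-error probability density - can be sketched with a Gaussian kernel; the grid, segment size and fluence values below are illustrative, not the study's plans:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Toy fluence map: a 10 mm x 10 mm segment on a 1 mm grid (small
        # segments are the ones most sensitive to setup-error blurring)
        fluence = np.zeros((100, 100))
        fluence[45:55, 45:55] = 1.0

        pixel_mm = 1.0
        for sigma_mm in (1.0, 3.0, 5.0):   # random setup error SDs from the study
            blurred = gaussian_filter(fluence, sigma=sigma_mm / pixel_mm)
            profile = blurred[50, :]        # central profile through the segment
            width = np.sum(profile > 0.5) * pixel_mm
            print(f"sigma = {sigma_mm} mm: peak fluence = {profile.max():.2f}, "
                  f"50% width = {width:.0f} mm")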

  11. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these

  12. Error estimation for goal-oriented spatial adaptivity for the SN equations on triangular meshes

    International Nuclear Information System (INIS)

    Lathouwers, D.

    2011-01-01

    In this paper we investigate different error estimation procedures for use within a goal-oriented adaptive algorithm for the SN equations on unstructured meshes. The method is based on a dual-weighted residual approach in which an appropriate adjoint problem is formulated and solved in order to obtain the importance of the residual errors in the forward problem for the specific goal of interest. The forward residuals and the adjoint function are combined to obtain both economical finite element meshes tailored to the solution of the target functional and error estimates. Various approximations made to render the calculation of the adjoint angular flux more economically attractive are evaluated by comparing the performance of the resulting adaptive algorithm and the quality of the error estimators when applied to two shielding-type test problems. (author)
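
    The dual-weighted residual idea is easiest to see on a plain linear system, where it is exact: if the goal is J(x) = g^T x and the adjoint solves A^T z = g, then J(x) - J(x_h) = z^T (b - A x_h) for any approximate solution x_h. A minimal numerical check (of the algebraic identity only, not the SN transport discretization itself):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 50
        A = 4.0 * np.eye(n) + rng.normal(scale=0.3, size=(n, n))  # forward operator
        b = rng.normal(size=n)
        g = rng.normal(size=n)                    # goal functional J(x) = g^T x

        x = np.linalg.solve(A, b)                 # exact forward solution
        x_h = x + rng.normal(scale=1e-3, size=n)  # perturbed 'numerical' solution
        z = np.linalg.solve(A.T, g)               # adjoint (importance) solution

        goal_error = g @ x - g @ x_h
        dwr_estimate = z @ (b - A @ x_h)          # adjoint-weighted residual
        print(f"true goal error: {goal_error:+.3e}")
        print(f"DWR estimate:    {dwr_estimate:+.3e}")  # equal up to roundoff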

  13. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were cl...

  14. The influence of random and systematic errors on a general definition of minimum detectable amount (MDA) applicable to all radiobioassay measurements

    International Nuclear Information System (INIS)

    Brodsky, A.

    1985-01-01

    An approach to defining minimum detectable amount (MDA) of radioactivity in a sample will be discussed, with the aim of obtaining comments helpful in developing a formulation of MDA that will be broadly applicable to all kinds of radiobioassay measurements, and acceptable to the scientists who make these measurements. Also, the influence of random and systematic errors on the defined MDA are examined

  15. A posteriori error estimator and AMR for discrete ordinates nodal transport methods

    International Nuclear Information System (INIS)

    Duo, Jose I.; Azmy, Yousry Y.; Zikatanov, Ludmil T.

    2009-01-01

    In the development of high fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing the quality of the solution are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L2 error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested in two problems featuring strong heterogeneity and a strongly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell error's spatial distribution pattern closely. The AMR strategy proves beneficial for optimizing resources, primarily by reducing the number of unknowns solved for to achieve the prescribed solution accuracy in the global L2 error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns

  16. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
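
    The ringdown model described here - a superposition of exponentially damped sinusoids - is simple to write down; the amplitudes, frequencies and damping times below are placeholders, not fitted Kerr quasinormal-mode values:

        import numpy as np

        def ringdown(t, modes):
            # h(t) = sum_k A_k * exp(-t / tau_k) * cos(2 pi f_k t + phi_k)
            h = np.zeros_like(t)
            for amp, f, tau, phi in modes:
                h += amp * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi)
            return h

        t = np.linspace(0.0, 0.05, 4096)                           # seconds
        fundamental = [(1.0, 250.0, 0.004, 0.0)]                   # l = m = 2 (illustrative)
        with_overtone = fundamental + [(0.8, 240.0, 0.0013, 1.0)]  # + first overtone

        h0 = ringdown(t, fundamental)
        h1 = ringdown(t, with_overtone)
        # Overtones decay fastest, so they matter most at early times - which is
        # why fits ignoring them bias the recovered frequencies and damping times
        mismatch = np.linalg.norm(h1 - h0) / np.linalg.norm(h1)
        print(f"relative difference with/without overtone: {mismatch:.2f}")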

  17. Estimation of Branch Topology Errors in Power Networks by WLAV State Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hong Rae [Soonchunhyang University(Korea); Song, Kyung Bin [Kei Myoung University(Korea)

    2000-06-01

    The purpose of this paper is to detect and identify topological errors in order to maintain a reliable database for the state estimator. In this paper, a two-stage estimation procedure is used to identify the topology errors. At the first stage, the WLAV state estimator, which has the characteristic of removing bad data during the estimation procedure, is run to find the suspected branches at which topology errors take place. The resulting residuals are normalized and the measurements with significant normalized residuals are selected. A set of suspected branches is formed based on these selected measurements; if a selected measurement is a line flow, the corresponding branch is suspected; if it is an injection, then all the branches connecting the injection bus to its immediate neighbors are suspected. A new WLAV state estimator adding the branch flow errors to the state vector is developed to identify the branch topology errors. Sample cases of a single topology error and of a topology error with a measurement error are applied to the IEEE 14-bus test system. (author). 24 refs., 1 fig., 9 tabs.

  18. Consequences of leaf calibration errors on IMRT delivery

    International Nuclear Information System (INIS)

    Sastre-Padro, M; Welleweerd, J; Malinen, E; Eilertsen, K; Olsen, D R; Heide, U A van der

    2007-01-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT

  19. Heat transfer properties of organic coolants containing high boiling residues

    International Nuclear Information System (INIS)

    Debbage, A.G.; Driver, M.; Waller, P.R.

    1964-01-01

    Heat transfer measurements were made in forced convection with Santowax R, mixtures of Santowax R and pyrolytic high boiling residue, mixtures of Santowax R and CMRE radiolytic high boiling residue, and OMRE coolant, in the range of Reynolds number 10^4 to 10^5. The data were correlated with the equation Nu = 0.015 Re_b^0.85 Pr_b^0.4 with an r.m.s. error of ± 8.5%. The total maximum error arising from the experimental method and inherent errors in the physical property data has been estimated to be less than ± 8.5%. From the correlation and physical property data, the decrease in heat transfer coefficient with increasing high boiling residue concentration has been determined. It has been shown that subcooled boiling in organic coolants containing high boiling residues is a complex phenomenon and the advantages to be gained by operating a reactor in this region may be marginal. Gas bearing pumps used initially in these experiments were found to be unsuitable; a re-designed ball bearing system lubricated with a terphenyl mixture was found to operate successfully. (author)
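
    The reported correlation can be applied directly to estimate heat transfer coefficients via h = Nu k / d; the Prandtl number, thermal conductivity and tube diameter below are illustrative placeholders, not Santowax R property data:

        def nusselt(re, pr):
            # Correlation from the study: Nu = 0.015 Re_b^0.85 Pr_b^0.4 (r.m.s. +/- 8.5%)
            return 0.015 * re ** 0.85 * pr ** 0.4

        def htc(re, pr, k, d):
            # Heat transfer coefficient from the definition Nu = h d / k
            return nusselt(re, pr) * k / d

        k = 0.12   # W/(m K), thermal conductivity (illustrative)
        d = 0.02   # m, tube diameter (illustrative)
        pr = 10.0  # Prandtl number (illustrative)
        for re in (1e4, 5e4, 1e5):   # Reynolds number range covered by the data
            print(f"Re = {re:8.0f}: Nu = {nusselt(re, pr):7.1f}, "
                  f"h = {htc(re, pr, k, d):8.0f} W/(m^2 K)")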

  20. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  1. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Taynna Vernalha Rocha [Faculdades Pequeno Principe (FPP), Curitiba, PR (Brazil); Cordova Junior, Arno Lotar; Almeida, Cristiane Maria; Piedade, Pedro Argolo; Silva, Cintia Mara da, E-mail: taynnavra@gmail.com [Centro de Radioterapia Sao Sebastiao, Florianopolis, SC (Brazil); Brincas, Gabriela R. Baseggio [Centro de Diagnostico Medico Imagem, Florianopolis, SC (Brazil); Marins, Priscila; Soboll, Danyel Scheidegger [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2016-03-15

    Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of a significant difference between the two techniques when the ART-210 head phantom was used. (author)

  2. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  3. Standard test method for verifying the alignment of X-Ray diffraction instrumentation for residual stress measurement

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the preparation and use of a flat stress-free test specimen for the purpose of checking the systematic error caused by instrument misalignment or sample positioning in X-ray diffraction residual stress measurement, or both. 1.2 This test method is applicable to apparatus intended for X-ray diffraction macroscopic residual stress measurement in polycrystalline samples employing measurement of a diffraction peak position in the high-back reflection region, and in which the θ, 2θ, and ψ rotation axes can be made to coincide (see Fig. 1). 1.3 This test method describes the use of iron powder which has been investigated in round-robin studies for the purpose of verifying the alignment of instrumentation intended for stress measurement in ferritic or martensitic steels. To verify instrument alignment prior to stress measurement in other metallic alloys and ceramics, powder having the same or lower diffraction angle as the material to be measured should be prepared in similar fashion...

  4. A new stochastic model considering satellite clock interpolation errors in precise point positioning

    Science.gov (United States)

    Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong

    2018-03-01

    Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the occurrence of white noise in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and such noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was structured that considered the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 stations worldwide from the IGS showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were respectively shortened by 4.8% and 4.0% when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively mitigated.

  5. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the

  6. Force Reproduction Error Depends on Force Level, whereas the Position Reproduction Error Does Not

    NARCIS (Netherlands)

    Onneweer, B.; Mugge, W.; Schouten, Alfred Christiaan

    2016-01-01

    When reproducing a previously perceived force or position, humans make systematic errors. This study determined the effect of force level on force and position reproduction, when both target and reproduction force are self-generated with the same hand. Subjects performed force reproduction tasks at

  7. Residual stress analysis in carbon fiber-reinforced SiC ceramics

    International Nuclear Information System (INIS)

    Broda, M.

    1998-01-01

    Systematic residual stress analyses carried out in long-fiber reinforced SiC ceramics are reported. The laminated C-fiber/SiC-matrix specimens used were prepared by polymer pyrolysis, and the structural component specimens used are industrial products. Various diffraction methods were applied for non-destructive evaluation of the residual stress fields, so as to completely detect the residual stresses and their distribution in the specimens. The residual stress fields at the surface (μm scale) were measured using characteristic X-radiation, applying the sin²ψ method as well as the scatter vector method. For residual stress field analysis in the bulk volume (cm scale), neutron diffraction was applied. The stress fields in the fiber layers (approx. 250 μm) were measured as a function of their location within the laminated composite by using an energy-dispersive method and synchrotron radiation. By means of the systematic, process-accompanying residual stress and phase analyses, conclusions can be drawn as to possible approaches for optimizing fabrication parameters. (orig./CB)
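
    The sin²ψ evaluation mentioned above amounts to a linear fit of lattice spacing against sin²ψ, with the in-plane stress proportional to the slope; the sketch below uses invented d-spacings and steel-like elastic constants purely for illustration (the constants for C/SiC would differ):

        import numpy as np

        # Illustrative d-spacing measurements at several psi tilts
        psi = np.deg2rad([0, 18, 27, 33, 39, 45])
        sin2psi = np.sin(psi) ** 2
        d = np.array([1.17020, 1.17032, 1.17041, 1.17048, 1.17056, 1.17067])  # angstrom

        slope, d0 = np.polyfit(sin2psi, d, 1)  # linear fit: d = d0 + slope * sin^2(psi)
        E, nu = 210e9, 0.29                    # elastic constants (illustrative, steel-like)
        stress = slope / d0 * E / (1 + nu)     # biaxial sin^2(psi) relation, Pa
        print(f"slope = {slope:.2e} A, d0 = {d0:.5f} A, stress = {stress / 1e6:.0f} MPa")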

  8. Determination of fission products and actinides by inductively coupled plasma-mass spectrometry using isotope dilution analysis. A study of random and systematic errors

    International Nuclear Information System (INIS)

    Ignacio Garcia Alonso, Jose

    1995-01-01

    The theory of the propagation of errors (random and systematic) for isotope dilution analysis (IDA) has been applied to the analysis of fission products and actinide elements by inductively coupled plasma-mass spectrometry (ICP-MS). Systematic errors in ID-ICP-MS arising from mass-discrimination (mass bias), detector non-linearity and isobaric interferences in the measured isotopes have to be corrected for in order to achieve accurate results. The mass bias factor and the detector dead-time can be determined by using natural elements with well-defined isotope abundances. A combined method for the simultaneous determination of both factors is proposed. On the other hand, isobaric interferences for some fission products and actinides cannot be eliminated using mathematical corrections (due to the unknown isotope abundances in the sample) and a chemical separation is necessary. The theory for random error propagation in IDA has been applied to the determination of non-natural elements by ICP-MS taking into account all possible sources of uncertainty with pulse counting detection. For the analysis of fission products, the selection of the right spike isotope composition and spike to sample ratio can be performed by applying conventional random propagation theory. However, it has been observed that, in the experimental determination of the isotope abundances of the fission product elements to be determined, the correction for mass-discrimination and the correction for detector dead-time losses contribute to the total random uncertainty. For the instrument used in the experimental part of this study, it was found that the random uncertainty on the measured isotope ratios followed Poisson statistics for low counting rates whereas, for high counting rates, source instability was the main source of error
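
    The two instrumental corrections named in this record have standard textbook forms, sketched below as a hedged illustration rather than code from the thesis: a non-paralyzable dead-time correction for pulse-counting rates and a linear-law mass bias correction for a measured isotope ratio. The dead time, bias factor, and example values are placeholder assumptions.

    ```python
    def deadtime_correct(observed_cps: float, tau_s: float) -> float:
        """Non-paralyzable detector dead-time correction.

        observed_cps : measured count rate (counts per second)
        tau_s        : detector dead time in seconds (instrument-specific)
        """
        return observed_cps / (1.0 - observed_cps * tau_s)

    def mass_bias_correct(ratio_measured: float, delta_m: float, eps: float) -> float:
        """Linear-law mass bias correction for an isotope ratio.

        ratio_measured : measured isotope ratio
        delta_m        : mass difference between the two isotopes
        eps            : per-mass-unit bias factor, determined from a
                         natural element with well-defined isotope abundances
        """
        return ratio_measured / (1.0 + eps * delta_m)

    # Placeholder values: 5e5 cps with a 35 ns dead time, and a ratio two
    # mass units apart with an assumed 0.3 % per-u bias factor.
    true_cps = deadtime_correct(5e5, 35e-9)
    true_ratio = mass_bias_correct(2.10, delta_m=2, eps=0.003)
    print(f"{true_cps:.0f} cps, corrected ratio {true_ratio:.4f}")
    ```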

  9. Residual volume on land and when immersed in water: effect on percent body fat.

    Science.gov (United States)

    Demura, Shinichi; Yamaji, Shunsuke; Kitabayashi, Tamotsu

    2006-08-01

    There is a large residual volume (RV) error when assessing percent body fat by means of hydrostatic weighing. It has generally been measured before hydrostatic weighing. However, an individual's maximal exhalations on land and in the water may not be identical. The aims of this study were to compare residual volumes and vital capacities on land and when immersed to the neck in water, and to examine the influence of the measurement error on percent body fat. The participants were 20 healthy Japanese males and 20 healthy Japanese females. To assess the influence of the RV error on percent body fat in both conditions and to evaluate the cross-validity of the prediction equation, another 20 males and 20 females were measured using hydrostatic weighing. Residual volume was measured on land and in the water using a nitrogen wash-out technique based on an open-circuit approach. In water, residual volume was measured with the participant sitting on a chair while the whole body, except the head, was submerged. The trial-to-trial reliabilities of residual volume in both conditions were very good (intraclass correlation coefficient > 0.98). Although residual volume measured under the two conditions did not agree completely, the two measurements showed a high correlation (males: 0.880; females: 0.853). The correlation between percent body fat computed using residual volume measured in the two conditions was also very good for both sexes (males: r = 0.902; females: r = 0.869; differences in percent body fat: -3.4 to 2.2% for males; -6.3 to 4.4% for females). We conclude that if these errors are considered unimportant, residual volume measured on land can be used when assessing body composition.

  10. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Xu, H; Chetty, I; Wen, N

    2016-01-01

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30°, 45°, 60°) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6 degrees in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  11. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H [Wayne State University, Detroit, MI (United States); Chetty, I; Wen, N [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30°, 45°, 60°) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6 degrees in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  12. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  13. Solow Residuals Without Capital Stocks

    DEFF Research Database (Denmark)

    Burda, Michael C.; Severgnini, Battista

    2014-01-01

    We use synthetic data generated by a prototypical stochastic growth model to assess the accuracy of the Solow residual (Solow, 1957) as a measure of total factor productivity (TFP) growth when the capital stock in use is measured with error. We propose two alternative measurements based on curren...

  14. Analysis of field errors in existing undulators

    International Nuclear Information System (INIS)

    Kincaid, B.M.

    1990-01-01

    The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs

  15. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.

  16. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.; Pan, B.; Lubineau, Gilles

    2017-01-01

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
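
    The compensation step described in both records reduces to fitting a smooth parametric model to the reference sample's apparent motion and subtracting it from the test sample's measured displacements. A minimal one-dimensional sketch of that idea follows; the polynomial degree, coordinates, and displacement values are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    # Apparent (artificial) displacements, in micrometres, detected on the
    # stationary reference sample at axial positions z (mm); values invented.
    z_ref = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
    u_ref = np.array([0.8, 1.0, 1.3, 1.7, 2.2, 2.8])

    # Fit a low-order parametric polynomial to the artificial field.
    coeffs = np.polyfit(z_ref, u_ref, deg=2)

    # DVC-measured displacements of the test sample at its own positions.
    z_test = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
    u_test = np.array([5.1, 5.5, 6.0, 6.7, 7.5])

    # Subtract the modelled artificial deformation, leaving the true motion.
    u_corrected = u_test - np.polyval(coeffs, z_test)
    print(np.round(u_corrected, 3))
    ```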

  17. Avoiding Systematic Errors in Isometric Squat-Related Studies without Pre-Familiarization by Using Sufficient Numbers of Trials

    Directory of Open Access Journals (Sweden)

    Pekünlü Ekim

    2014-10-01

    Full Text Available There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first “n” trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (The Best of n Trials Method [BnT]). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93-0.98) and coefficients of variation (3.4-7.0%). The Wilcoxon’s signed rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9-10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning effect-related systematic errors
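
    The learning-effect bias the authors warn about is easy to reproduce numerically. The toy simulation below is our own hedged illustration, not the study's simulation code: each subject's expected force rises toward a plateau across the 8 trials (an acute learning effect), and the best-of-3 estimate is compared with the best-of-8 estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, n_trials = 40, 8

    true_miss = rng.normal(2000, 300, size=(n_subjects, 1))       # N, per subject
    # Learning curve: expected performance approaches the true maximum.
    learning = 1.0 - 0.15 * np.exp(-np.arange(n_trials) / 2.0)
    trials = true_miss * learning + rng.normal(0, 40, size=(n_subjects, n_trials))

    b3t = trials[:, :3].max(axis=1)    # best of the first 3 trials
    b8t = trials.max(axis=1)           # best of all 8 trials

    print(f"mean B3T = {b3t.mean():.0f} N, mean B8T = {b8t.mean():.0f} N")
    print(f"systematic shortfall of B3T: {(b8t - b3t).mean():.0f} N")
    ```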

  18. Cone-Beam CT Assessment of Interfraction and Intrafraction Setup Error of Two Head-and-Neck Cancer Thermoplastic Masks

    International Nuclear Information System (INIS)

    Velec, Michael; Waldron, John N.; O'Sullivan, Brian; Bayley, Andrew; Cummings, Bernard; Kim, John J.; Ringash, Jolie; Breen, Stephen L.; Lockwood, Gina A.; Dawson, Laura A.

    2010-01-01

    Purpose: To prospectively compare setup error in standard thermoplastic masks and skin-sparing masks (SSMs) modified with low neck cutouts for head-and-neck intensity-modulated radiation therapy (IMRT) patients. Methods and Materials: Twenty head-and-neck IMRT patients were randomized to be treated in a standard mask (SM) or SSM. Cone-beam computed tomography (CBCT) scans, acquired daily after both initial setup and any repositioning, were used for initial and residual interfraction evaluation, respectively. Weekly, post-IMRT CBCT scans were acquired for intrafraction setup evaluation. The population random (σ) and systematic (Σ) errors were compared for SMs and SSMs. Skin toxicity was recorded weekly by use of Radiation Therapy Oncology Group criteria. Results: We evaluated 762 CBCT scans in 11 patients randomized to the SM and 9 to the SSM. Initial interfraction σ was ≤1.6 mm or ≤1.1° for the SM and ≤2.0 mm and 0.8° for the SSM. Initial interfraction Σ was ≤1.0 mm or ≤1.4° for the SM and ≤1.1 mm or ≤0.9° for the SSM. These errors were reduced before IMRT with CBCT image guidance, with no significant differences in residual interfraction or intrafraction uncertainties between SMs and SSMs. Intrafraction σ and Σ were less than 1 mm and less than 1° for both masks. Less severe skin reactions were observed in the cutout regions of the SSM compared with non-cutout regions. Conclusions: Interfraction and intrafraction setup errors are not significantly different for SSMs and conventional masks in head-and-neck radiation therapy. Mask cutouts should be considered for these patients in an effort to reduce skin toxicity.

  19. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Science.gov (United States)

    Beria, Harsh; Nanda, Trushnamayee; Singh Bisht, Deepak; Chatterjee, Chandranath

    2017-12-01

    The last couple of decades have seen the outburst of a number of satellite-based precipitation products with Tropical Rainfall Measuring Mission (TRMM) as the most widely used for hydrologic applications. Transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for year 2014 and compared with the corresponding (2014) and retrospective (1998-2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  20. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Directory of Open Access Journals (Sweden)

    H. Beria

    2017-12-01

    Full Text Available The last couple of decades have seen the outburst of a number of satellite-based precipitation products with Tropical Rainfall Measuring Mission (TRMM) as the most widely used for hydrologic applications. Transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for year 2014 and compared with the corresponding (2014) and retrospective (1998–2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  1. Standard error propagation in R-matrix model fitting for light elements

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin

    2003-01-01

    The error propagation features of R-matrix model fitting for the ⁷Li, ¹¹B and ¹⁷O systems were researched systematically. Some laws of error propagation were revealed, an empirical formula P_j = U_j^c / U_j^d = K_j · S̄ · √m / √N for describing standard error propagation was established, and the most likely error ranges for the standard cross sections of ⁶Li(n,t), ¹⁰B(n,α₀) and ¹⁰B(n,α₁) were estimated. The problem that the standard errors of light-nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect. Yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of ⁷Li, ¹¹B and ¹⁷O have been studied systematically; some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are suitable for similar model fitting in other scientific fields. (author)

  2. Error analysis of the microradiographical determination of mineral content in mineralised tissue slices

    International Nuclear Information System (INIS)

    Jong, E. de J. de; Bosch, J.J. ten

    1985-01-01

    The microradiographic method, used to measure the mineral content in slices of mineralised tissues as a function of position, is analysed. The total error in the measured mineral content is split into systematic errors per microradiogram and random noise errors. These errors are measured quantitatively. Predominant contributions to systematic errors appear to be x-ray beam inhomogeneity, the determination of the step wedge thickness and stray light in the densitometer microscope, while noise errors are under the influence of the choice of film, the value of the optical film transmission of the microradiographic image and the area of the densitometer window. Optimisation criteria are given. The authors used these criteria, together with the requirement that the method be fast and easy to build an optimised microradiographic system. (author)

  3. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    Science.gov (United States)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars like those of the Global Precipitation Measurement (GPM) mission.

  4. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...
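
    The analysis-of-precision idea sketched in this record, predicting the standard deviation of a result from per-step standard deviations and then testing replicates against the prediction, can be written in a few lines. The following is a generic illustration under assumed step uncertainties, not Heydorn's code.

    ```python
    import numpy as np
    from scipy import stats

    # Assumed relative standard deviations for each step of a method (%),
    # e.g., weighing, irradiation, counting.
    step_rsd = np.array([0.5, 1.2, 0.8])
    predicted_rsd = np.sqrt(np.sum(step_rsd**2))   # combined in quadrature

    # Replicate analytical results (illustrative values, mg/kg).
    replicates = np.array([10.12, 10.05, 9.98, 10.21, 10.08])
    mean = replicates.mean()
    observed_rsd = 100 * replicates.std(ddof=1) / mean

    # Does the observed variability agree with the prediction?
    # Chi-square of replicates about their mean at the predicted sigma.
    sigma_abs = mean * predicted_rsd / 100
    T = np.sum(((replicates - mean) / sigma_abs) ** 2)
    p = stats.chi2.sf(T, df=len(replicates) - 1)
    print(f"predicted {predicted_rsd:.2f}%, observed {observed_rsd:.2f}%, p = {p:.2f}")
    # A small p signals variability beyond the predicted precision,
    # i.e., the method is not in statistical control.
    ```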

  5. Standard practice for construction of a stepped block and its use to estimate errors produced by speed-of-sound measurement systems for use on solids

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This practice provides a means for evaluating both systematic and random errors for ultrasonic speed-of-sound measurement systems which are used for evaluating material characteristics associated with residual stress and which may also be used for nondestructive measurements of the dynamic elastic moduli of materials. Important features and construction details of a reference block crucial to these error evaluations are described. This practice can be used whenever the precision and bias of sound speed values are in question. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  6. Experiences of and support for nurses as second victims of adverse nursing errors: a qualitative systematic review.

    Science.gov (United States)

    Cabilan, C J; Kynoch, Kathryn

    2017-09-01

    Second victims are clinicians who have made adverse errors and feel traumatized by the experience. The current published literature on second victims is mainly representative of doctors, hence nurses' experiences are not fully depicted. This systematic review was necessary to understand the second victim experience for nurses, explore the support provided, and recommend appropriate support systems for nurses. To synthesize the best available evidence on nurses' experiences as second victims, and explore their experiences of the support they receive and the support they need. Participants were registered nurses who had made adverse errors. The review included studies that described nurses' experiences as second victims and/or the support they received after making adverse errors, conducted in any healthcare setting worldwide. The qualitative studies included used grounded theory, discourse analysis and phenomenology. A structured search strategy was used to locate all unpublished and published qualitative studies, limited to the English language and published between 1980 and February 2017. The references of studies selected for eligibility screening were hand-searched for additional literature. Eligible studies were assessed by two independent reviewers for methodological quality using a standardized critical appraisal instrument from the Joanna Briggs Institute Qualitative Assessment and Review Instrument (JBI QARI). Themes and narrative statements were extracted from papers included in the review using the standardized data extraction tool from JBI QARI. Data synthesis was conducted using the Joanna Briggs Institute meta-aggregation approach. There were nine qualitative studies included in the review. The narratives of 284 nurses generated a total of 43 findings, which formed 15 categories based on similarity of meaning. Four synthesized findings were generated from the categories: (i) The error brings a considerable emotional burden to the

  7. Mars gravity field error analysis from simulated radio tracking of Mars Observer

    International Nuclear Information System (INIS)

    Smith, D.E.; Lerch, F.J.; Chan, J.C.; Chinn, D.S.; Iz, H.B.; Mallama, A.; Patel, G.B.

    1990-01-01

    The Mars Observer (MO) Mission, in a near-polar orbit at 360-410 km altitude for nearly a 2-year observing period, will greatly improve our understanding of the geophysics of Mars, including its gravity field. To assess the expected improvement of the gravity field, the authors have conducted an error analysis based upon the mission plan for the Mars Observer radio tracking data from the Deep Space Network. Their results indicate that it should be possible to obtain a high-resolution model (spherical harmonics complete to degree and order 50, corresponding to a 200-km horizontal resolution) for the gravitational field of the planet. This model, in combination with topography from MO altimetry, should provide for an improved determination of the broad-scale density structure and stress state of the Martian crust and upper mantle. The mathematical model for the error analysis is based on the representation of Doppler tracking data as a function of the Martian gravity field in spherical harmonics, solar radiation pressure, atmospheric drag, angular momentum desaturation residual acceleration (AMDRA) effects, tracking station biases, and the MO orbit parameters. Two approaches are employed. In the first case, the error covariance matrix of the gravity model is estimated, including the effects from all the nongravitational parameters (noise-only case). In the second case, the gravity recovery error is computed as above but includes unmodelled systematic effects from atmospheric drag, AMDRA, and solar radiation pressure (biased case). The error spectrum of gravity shows an order of magnitude of improvement over current knowledge, based on Doppler data precision from a single station of 0.3 mm s⁻¹ noise for 1-min integration intervals during three 60-day periods

  8. Residual-based Methods for Controlling Discretization Error in CFD

    Science.gov (United States)

    2015-08-24


  9. Calibration Errors in Interferometric Radio Polarimetry

    Science.gov (United States)

    Hales, Christopher A.

    2017-08-01

    Residual calibration errors are difficult to predict in interferometric radio polarimetry because they depend on the observational calibration strategy employed, encompassing the Stokes vector of the calibrator and parallactic angle coverage. This work presents analytic derivations and simulations that enable examination of residual on-axis instrumental leakage and position-angle errors for a suite of calibration strategies. The focus is on arrays comprising alt-azimuth antennas with common feeds over which parallactic angle is approximately uniform. The results indicate that calibration schemes requiring parallactic angle coverage in the linear feed basis (e.g., the Atacama Large Millimeter/submillimeter Array) need only observe over 30°, beyond which no significant improvements in calibration accuracy are obtained. In the circular feed basis (e.g., the Very Large Array above 1 GHz), 30° is also appropriate when the Stokes vector of the leakage calibrator is known a priori, but this rises to 90° when the Stokes vector is unknown. These findings illustrate and quantify concepts that were previously obscure rules of thumb.

  10. Bayesian analysis of data and model error in rainfall-runoff hydrological models

    Science.gov (United States)

    Kavetski, D.; Franks, S. W.; Kuczera, G.

    2004-12-01

    A major unresolved issue in the identification and use of conceptual hydrologic models is realistic description of uncertainty in the data and model structure. In particular, hydrologic parameters often cannot be measured directly and must be inferred (calibrated) from observed forcing/response data (typically, rainfall and runoff). However, rainfall varies significantly in space and time, yet is often estimated from sparse gauge networks. Recent work showed that current calibration methods (e.g., standard least squares, multi-objective calibration, generalized likelihood uncertainty estimation) ignore forcing uncertainty and assume that the rainfall is known exactly. Consequently, they can yield strongly biased and misleading parameter estimates. This deficiency confounds attempts to reliably test model hypotheses, to generalize results across catchments (the regionalization problem) and to quantify predictive uncertainty when the hydrologic model is extrapolated. This paper continues the development of a Bayesian total error analysis (BATEA) methodology for the calibration and identification of hydrologic models, which explicitly incorporates the uncertainty in both the forcing and response data, and allows systematic model comparison based on residual model errors and formal Bayesian hypothesis testing (e.g., using Bayes factors). BATEA is based on explicit stochastic models for both forcing and response uncertainty, whereas current techniques focus solely on response errors. Hence, unlike existing methods, the BATEA parameter equations directly reflect the modeler's confidence in all the data. We compare several approaches to approximating the parameter distributions: a) full Markov Chain Monte Carlo methods and b) simplified approaches based on linear approximations. Studies using synthetic and real data from the US and Australia show that BATEA systematically reduces the parameter bias, leads to more meaningful model fits and allows model comparison taking

  11. SIMULATION OF INERTIAL NAVIGATION SYSTEM ERRORS AT AERIAL PHOTOGRAPHY FROM UAV

    Directory of Open Access Journals (Sweden)

    R. Shults

    2017-05-01

    Full Text Available The accuracy of UAV positioning with an INS during aerial photography can be determined in two different ways: modelling of the measurement errors or in-field calibration of the INS. The paper presents the results of INS error research by mathematical modelling. The following steps were considered: development of an INS computer model; INS simulation; and, using error-free reference data, estimation of the errors and their influence on the accuracy of maps created from UAV data. It must be remembered that the values of the orientation angles and the coordinates of the projection centre may change abruptly due to the influence of the atmosphere (different air density, wind, etc.). Therefore, the mathematical model of the INS was constructed taking into account different models of wind gusts. Typical characteristics of micro-electromechanical (MEMS) INS and the parameters of the standard atmosphere were used for the simulation. The simulation established the dominance of systematic INS errors, which accumulate during photography and require a compensation mechanism, especially for the orientation angles. MEMS INS have a high level of noise at the system input. Thanks to the developed model, the impact of noise can be investigated separately, in the absence of systematic errors. The research found that, over an observation interval of 5 seconds, the impacts of the random and systematic components are almost the same. The developed model of INS errors was implemented in the Matlab environment and can readily be improved and enhanced with new blocks.
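
    As a hedged companion to this record, the sketch below simulates a single MEMS gyro axis with white measurement noise plus a slowly accumulating bias (a simple random-walk model of the systematic component) and integrates it to an orientation-angle error over the 5-second interval mentioned above. The per-sample noise magnitudes are our assumptions, not the paper's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    dt, t_end = 0.01, 5.0                    # 100 Hz over the 5 s interval
    n = int(t_end / dt)

    # Assumed per-sample error magnitudes for a MEMS-grade gyro axis (deg/s).
    sigma_noise = 0.08                       # white measurement noise
    sigma_bias = 0.002                       # random-walk increment of the bias

    bias = np.cumsum(rng.normal(0.0, sigma_bias, n))   # systematic, accumulating
    noise = rng.normal(0.0, sigma_noise, n)            # random, zero-mean

    rate_error = bias + noise                          # angular-rate error, deg/s
    angle_error = np.cumsum(rate_error) * dt           # integrated attitude error, deg

    print(f"attitude error after {t_end:.0f} s: {angle_error[-1]:+.4f} deg")
    print(f"systematic share: {np.cumsum(bias)[-1] * dt:+.4f} deg")
    ```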

  12. Prediction of the residual strength of clay using functional networks

    Directory of Open Access Journals (Sweden)

    S.Z. Khan

    2016-01-01

    Full Text Available Landslides are common natural hazards occurring in most parts of the world and have considerable adverse economic effects. Residual shear strength of clay is one of the most important factors in the determination of stability of slopes or landslides. This effect is more pronounced in sensitive clays, which show large changes in shear strength from peak to residual states. This study analyses the prediction of the residual strength of clay based on a new prediction model, functional networks (FN), using data available in the literature. The performance of FN was compared with support vector machine (SVM) and artificial neural network (ANN) models based on statistical parameters such as the correlation coefficient (R), Nash–Sutcliffe coefficient of efficiency (E), absolute average error (AAE), maximum average error (MAE) and root mean square error (RMSE). Based on the R and E parameters, FN is found to be a better prediction tool than ANN for the given data. However, the R and E values for FN are less than those for SVM. A prediction equation is presented that can be used by practicing geotechnical engineers. A sensitivity analysis is carried out to ascertain the importance of various inputs in the prediction of the output.
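
    The comparison statistics listed in this record have simple closed forms. The sketch below implements them for any pair of observed and predicted vectors; it is a generic illustration, and the study's exact definitions (e.g., of maximum average error) may differ.

    ```python
    import numpy as np

    def evaluation_metrics(obs: np.ndarray, pred: np.ndarray) -> dict:
        """Common goodness-of-fit statistics for model predictions."""
        resid = obs - pred
        return {
            # Pearson correlation between observed and predicted values.
            "R": np.corrcoef(obs, pred)[0, 1],
            # Nash-Sutcliffe coefficient of efficiency.
            "E": 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2),
            # Absolute average error.
            "AAE": np.mean(np.abs(resid)),
            # Maximum absolute error (one plausible reading of "MAE" here).
            "MAE": np.max(np.abs(resid)),
            # Root mean square error.
            "RMSE": np.sqrt(np.mean(resid**2)),
        }

    obs = np.array([24.0, 18.5, 30.2, 22.1, 27.4])    # e.g., residual friction angle, deg
    pred = np.array([23.1, 19.0, 29.0, 23.5, 26.8])
    print(evaluation_metrics(obs, pred))
    ```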

  13. Dosimetric implications of inter- and intrafractional prostate positioning errors during tomotherapy. Comparison of gold marker-based registrations with native MVCT

    Energy Technology Data Exchange (ETDEWEB)

    Wust, Peter; Joswig, Marc; Graf, Reinhold; Boehmer, Dirk; Beck, Marcus; Barelkowski, Thomasz; Budach, Volker; Ghadjar, Pirus [Charite Universitaetsmedizin Berlin, Department of Radiation Oncology and Radiotherapy, Berlin (Germany)

    2017-09-15

    For high-dose radiation therapy (RT) of prostate cancer, image-guided (IGRT) and intensity-modulated RT (IMRT) approaches are standard. Less is known regarding comparisons of different IGRT techniques and the resulting residual errors, as well as regarding their influences on dose distributions. A total of 58 patients who received tomotherapy-based RT up to 84 Gy for high-risk prostate cancer underwent IGRT based either on daily megavoltage CT (MVCT) alone (n = 43) or the additional use of gold markers (n = 15) under routine conditions. Planned Adaptive (Accuray Inc., Madison, WI, USA) software was used for detailed offline analysis to quantify residual interfractional prostate positioning errors, along with systematic and random errors and the resulting safety margins after both IGRT approaches. Dosimetric parameters for clinical target volume (CTV) coverage and exposure of organs at risk (OAR) were also analyzed and compared. Interfractional as well as intrafractional displacements were determined. Particularly in the vertical direction, residual interfractional positioning errors were reduced using the gold marker-based approach, but dosimetric differences were moderate and the clinical relevance relatively small. Intrafractional prostate motion proved to be quite high, with displacements of 1-3 mm; however, these did not result in additional dosimetric impairments. Residual interfractional positioning errors were reduced using gold marker-based IGRT; however, this resulted in only slightly different final dose distributions. Therefore, daily MVCT-based IGRT without markers might be a valid alternative. (orig.) [German abstract, translated: For high-dose irradiation of prostate cancer, image-guided (IGRT) and intensity-modulated radiotherapy (IMRT) are standard. Open questions remain when comparing IGRT techniques with regard to residual errors and influences on the dose distribution. In 58 patients whose high-risk prostate cancer

  14. Notes on human error analysis and prediction

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1978-11-01

    The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human errors and their roles in industrial systems are discussed. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and "criteria for analyzability", which must be met by industrial systems so that a systematic analysis can be performed, are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing as found in the US Licensee Event Reports. (author)

  15. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  16. Drought Persistence Errors in Global Climate Models

    Science.gov (United States)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM simulations to observation-based data sets. To do so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates of drought persistence, where a dry status is defined as a negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at the global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at the monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At the annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
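
    The persistence estimator used here, a dry-to-dry transition probability with "dry" meaning a negative precipitation anomaly, is straightforward to compute from any precipitation series. The sketch below is a hedged illustration on synthetic data; a real assessment would substitute observations or GCM output.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic monthly precipitation (mm); independent draws, so little
    # persistence is expected.
    precip = rng.gamma(shape=2.0, scale=40.0, size=600)

    # Dry status: negative anomaly relative to the long-term mean.
    dry = precip < precip.mean()

    # Dry-to-dry transition probability P(dry at t+1 | dry at t).
    prev, nxt = dry[:-1], dry[1:]
    p_dd = np.sum(prev & nxt) / np.sum(prev)

    print(f"P(dry->dry) = {p_dd:.3f} vs unconditional P(dry) = {dry.mean():.3f}")
    # P(dry->dry) above the unconditional P(dry) indicates persistence;
    # for this independent synthetic series the two should be close.
    ```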

  17. Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain

    Directory of Open Access Journals (Sweden)

    Yiwen Mei

    2016-03-01

    Full Text Available The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied on the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, error characteristics exhibit dependency on the flood type. Generally, timing of the event precipitation mass center and dispersion of the time series derived from satellite precipitation exhibits good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and the rain flood events with a high runoff coefficient. This event-based analysis of the satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.

  18. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I²CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  19. The use of adaptive radiation therapy to reduce setup error: a prospective clinical study

    International Nuclear Information System (INIS)

    Yan Di; Wong, John; Vicini, Frank; Robertson, John; Horwitz, Eric; Brabbins, Donald; Cook, Carla; Gustafson, Gary; Stromberg, Jannifer; Martinez, Alvaro

    1996-01-01

    Purpose: Adaptive Radiation Therapy (ART) is a closed-loop feedback process in which each patient's treatment is adaptively optimized according to the individual variation information measured during the course of treatment. The process aims to maximize the benefits of treatment for the individual patient. A prospective study is currently being conducted to test the feasibility and effectiveness of ART for clinical use. The present study is limited to compensating for the effects of systematic setup error. Methods and Materials: The study includes 20 patients treated on a linear accelerator equipped with a computer-controlled multileaf collimator (MLC) and an electronic portal imaging device (EPID). Alpha cradles are used to immobilize those patients treated for disease in the thoracic and abdominal regions, and thermoplastic masks for the head and neck. Portal images are acquired daily. Setup error of each treatment field is quantified off-line every day. As determined from an earlier retrospective study of different clinical sites, the setup variations measured over the first 4 to 9 days are used to estimate the systematic setup error and the standard deviation of the random setup error for each field. A setup adjustment is made if the estimated systematic setup error of the treatment field is 2 mm or larger. Instead of the conventional approach of repositioning the patient, the setup correction is implemented by reshaping the MLC field to compensate for the estimated systematic error. The entire process, from analysis of portal images to implementation of the modified MLC field, is performed via computer network. Systematic and random setup errors of the treatment after adjustment are compared with those prior to adjustment. Finally, the frequency distributions of block overlap accumulated throughout the treatment course are evaluated. Results: Sixty-seven percent of all treatment fields were reshaped to compensate for the estimated systematic errors. At the time of this writing
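
    The decision rule of this study, estimating each field's systematic setup error from the first few fractions and reshaping the MLC field once that estimate reaches 2 mm, can be written compactly. The sketch below is our schematic of that rule, not the clinical software; the daily offsets are illustrative.

    ```python
    import numpy as np

    THRESHOLD_MM = 2.0   # act when the estimated systematic error reaches this

    def assess_setup(first_days_offsets_mm: np.ndarray) -> dict:
        """Estimate systematic/random setup error from early-fraction imaging.

        first_days_offsets_mm : per-fraction setup offsets (mm) on one axis,
                                measured over the first 4-9 treatment days.
        """
        systematic = first_days_offsets_mm.mean()        # mean daily offset
        random_sd = first_days_offsets_mm.std(ddof=1)    # day-to-day spread
        return {
            "systematic_mm": systematic,
            "random_sd_mm": random_sd,
            "reshape_mlc": abs(systematic) >= THRESHOLD_MM,
        }

    daily = np.array([2.5, 1.8, 3.1, 2.2, 2.9, 1.6])     # illustrative, mm
    print(assess_setup(daily))   # mean offset exceeds 2 mm -> compensate via MLC
    ```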

  20. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, the Federal Rules of Evidence 702 mandate that judges consider factors such as peer review to ensure the reliability of expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  1. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Sen; Li, Guangjun; Wang, Maojie; Jiang, Qinfeng; Zhang, Yingjie [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China); Wei, Yuquan, E-mail: yuquawei@vip.sina.com [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China)

    2013-07-01

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulating plans and the clinical plans with evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. The dose distribution was highly sensitive to systematic MLC leaf position errors, with the sensitivity depending on field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.

  2. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time, we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicist's time, we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper

  3. A residual Monte Carlo method for discrete thermal radiative diffusion

    International Nuclear Information System (INIS)

    Evans, T.M.; Urbatsch, T.J.; Lichtenstein, H.; Morel, J.E.

    2003-01-01

    Residual Monte Carlo methods reduce statistical error at a rate of exp(-bN), where b is a positive constant and N is the number of particle histories. Contrast this convergence rate with 1/√N, which is the rate of statistical error reduction for conventional Monte Carlo methods. Thus, residual Monte Carlo methods hold great promise for increased efficiency relative to conventional Monte Carlo methods. Previous research has shown that the application of residual Monte Carlo methods to the solution of continuum equations, such as the radiation transport equation, is problematic for all but the simplest of cases. However, the residual method readily applies to discrete systems as long as those systems are monotone, i.e., they produce positive solutions given positive sources. We develop a residual Monte Carlo method for solving a discrete 1D non-linear thermal radiative equilibrium diffusion equation, and we compare its performance with that of the discrete conventional Monte Carlo method upon which it is based. We find that the residual method provides efficiency gains of many orders of magnitude. Part of the residual gain is due to the fact that we begin each timestep with an initial guess equal to the solution from the previous timestep. Moreover, fully consistent non-linear solutions can be obtained in a reasonable amount of time because of the effective lack of statistical noise. We conclude that the residual approach has great potential and that further research into such methods should be pursued for more general discrete and continuum systems

  4. Impact of MLC leaf position errors on simple and complex IMRT plans for head and neck cancer

    International Nuclear Information System (INIS)

    Mu, G; Ludlum, E; Xia, P

    2008-01-01

    The dosimetric impact of random and systematic multi-leaf collimator (MLC) leaf position errors is relatively unknown for head and neck intensity-modulated radiotherapy (IMRT) patients. In this report we studied 17 head and neck IMRT patients, including 12 treated with simple plans (fewer than 100 segments) and five treated with complex plans (more than 100 segments). Random errors (-2 to +2 mm) and systematic errors (±0.5 mm and ±1 mm) in MLC leaf positions were introduced into the clinical plans and the resultant dose distributions were analyzed based on defined endpoint doses. The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm for both simple and complex plans. However, for systematic MLC leaf position errors, we found significant dosimetric differences between the simple and complex IMRT plans. For a 1 mm systematic error, the average changes in D95% were 4% in simple plans versus 8% in complex plans. The average changes in D0.1cc of the spinal cord and brain stem were 4% in simple plans versus 12% in complex plans. The average changes in parotid glands were 9% in simple plans versus 13% in the complex plans. Overall, simple IMRT plans are less sensitive to leaf position errors than complex IMRT plans
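
    Endpoint doses of the kind used in this record (e.g. D95% of a target, D0.1cc of an organ at risk) are simple order statistics of a voxel dose array. A hedged Python sketch with invented dose values and a uniform-voxel-volume assumption; the 4% scaling is only a stand-in for a real perturbed plan:

        import numpy as np

        def d_vol_percent(dose, pct=95.0):
            """Dose received by at least pct% of the structure (e.g. D95%)."""
            return np.percentile(dose, 100.0 - pct)

        def d_hot_volume(dose, voxel_cc, hot_cc=0.1):
            """Minimum dose within the hottest hot_cc of the structure (e.g. D0.1cc)."""
            n = max(1, int(round(hot_cc / voxel_cc)))
            return np.sort(dose)[-n]

        rng = np.random.default_rng(0)
        clinical = rng.normal(70.0, 1.5, 100_000)   # hypothetical target doses, Gy
        perturbed = clinical * 0.96                 # stand-in for a systematic MLC error
        change = 100 * (d_vol_percent(perturbed) / d_vol_percent(clinical) - 1)
        print(f"D95% change: {change:+.1f}%")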

  5. Residual-driven online generalized multiscale finite element methods

    KAUST Repository

    Chung, Eric T.

    2015-09-08

    The construction of local reduced-order models via multiscale basis functions has been an area of active research. In this paper, we propose online multiscale basis functions which are constructed using the offline space and the current residual. Online multiscale basis functions are constructed adaptively in some selected regions based on our error indicators. We derive an error estimator which shows that one needs an offline space with certain properties to guarantee that additional online multiscale basis functions will decrease the error. This error decrease is independent of physical parameters, such as the contrast and multiple scales in the problem. The offline spaces are constructed using Generalized Multiscale Finite Element Methods (GMsFEM). We show that if one chooses a sufficient number of offline basis functions, one can guarantee that additional online multiscale basis functions will reduce the error independent of contrast. We note that the construction of online basis functions is motivated by the fact that the offline space construction does not take into account distant effects. Using the residual information, we can incorporate the distant information provided the offline approximation satisfies certain properties. In the paper, theoretical and numerical results are presented. Our numerical results show that if the offline space is sufficiently large (in terms of the dimension) such that the coarse space contains all multiscale spectral basis functions that correspond to small eigenvalues, then the error reduction from adding online multiscale basis functions is independent of the contrast. We discuss various ways of computing online multiscale basis functions, including the use of small-dimensional offline spaces.

  6. The Acquisition of Subject-Verb Agreement in Written French: From Novices to Experts' Errors.

    Science.gov (United States)

    Fayol, Michel; Largy, Pierre; Hupet, Michel

    1999-01-01

    Aims at demonstrating the gradual automatization of subject-verb agreement operation in young writers by examining developmental changes in the occurrence of agreement errors. Finds that subjects' performance moved from systematic errors to attraction errors through an intermediate phase. Concludes that attraction errors are a byproduct of the…

  7. Implication of spot position error on plan quality and patient safety in pencil-beam-scanning proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G. [Division of Medical Physics, Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2014-08-15

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2 × 2 to 15 × 15 cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5σ to 1.5σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot

  8. Planck 2013 results. III. LFI systematic uncertainties

    CERN Document Server

    Aghanim, N; Arnaud, M; Ashdown, M; Atrio-Barandela, F; Aumont, J; Baccigalupi, C; Banday, A J; Barreiro, R B; Battaner, E; Benabed, K; Benoît, A; Benoit-Lévy, A; Bernard, J -P; Bersanelli, M; Bielewicz, P; Bobin, J; Bock, J J; Bonaldi, A; Bonavera, L; Bond, J R; Borrill, J; Bouchet, F R; Bridges, M; Bucher, M; Burigana, C; Butler, R C; Cardoso, J -F; Catalano, A; Chamballu, A; Chiang, L -Y; Christensen, P R; Church, S; Colombi, S; Colombo, L P L; Crill, B P; Cruz, M; Curto, A; Cuttaia, F; Danese, L; Davies, R D; Davis, R J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Dick, J; Dickinson, C; Diego, J M; Dole, H; Donzelli, S; Doré, O; Douspis, M; Dupac, X; Efstathiou, G; Enßlin, T A; Eriksen, H K; Finelli, F; Forni, O; Frailis, M; Franceschi, E; Gaier, T C; Galeotta, S; Ganga, K; Giard, M; Giraud-Héraud, Y; Gjerløw, E; González-Nuevo, J; Górski, K M; Gratton, S; Gregorio, A; Gruppuso, A; Hansen, F K; Hanson, D; Harrison, D; Henrot-Versillé, S; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hivon, E; Hobson, M; Holmes, W A; Hornstrup, A; Hovest, W; Huffenberger, K M; Jaffe, T R; Jaffe, A H; Jewell, J; Jones, W C; Juvela, M; Kangaslahti, P; Keihänen, E; Keskitalo, R; Kiiveri, K; Kisner, T S; Knoche, J; Knox, L; Kunz, M; Kurki-Suonio, H; Lagache, G; Lähteenmäki, A; Lamarre, J -M; Lasenby, A; Laureijs, R J; Lawrence, C R; Leahy, J P; Leonardi, R; Lesgourgues, J; Liguori, M; Lilje, P B; Lindholm, V; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; Maino, D; Mandolesi, N; Maris, M; Marshall, D J; Martin, P G; Martínez-González, E; Masi, S; Matarrese, S; Matthai, F; Mazzotta, P; Meinhold, P R; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Moss, A; Munshi, D; Naselsky, P; Natoli, P; Netterfield, C B; Nørgaard-Nielsen, H U; Novikov, D; Novikov, I; O'Dwyer, I J; Osborne, S; Paci, F; Pagano, L; Paladini, R; Paoletti, D; Partridge, B; Pasian, F; Patanchon, G; Pearson, D; Peel, M; Perdereau, O; Perotto, L; Perrotta, F; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Platania, P; Pointecouteau, E; Polenta, G; Ponthieu, N; Popa, L; Poutanen, T; Pratt, G W; Prézeau, G; Prunet, S; Puget, J -L; Rachen, J P; Rebolo, R; Reinecke, M; Remazeilles, M; Ricciardi, S; Riller, T; Rocha, G; Rosset, C; Rossetti, M; Roudier, G; Rubiño-Martín, J A; Rusholme, B; Sandri, M; Santos, D; Scott, D; Seiffert, M D; Shellard, E P S; Spencer, L D; Starck, J -L; Stolyarov, V; Stompor, R; Sureau, F; Sutton, D; Suur-Uski, A -S; Sygnet, J -F; Tauber, J A; Tavagnacco, D; Terenzi, L; Toffolatti, L; Tomasi, M; Tristram, M; Tucci, M; Tuovinen, J; Türler, M; Umana, G; Valenziano, L; Valiviita, J; Van Tent, B; Varis, J; Vielva, P; Villa, F; Vittorio, N; Wade, L A; Wandelt, B D; Watson, R; Wilkinson, A; Yvon, D; Zacchei, A; Zonca, A

    2014-01-01

    We present the current estimate of instrumental and systematic effect uncertainties for the Planck Low Frequency Instrument relevant to the first release of the Planck cosmological results. We give an overview of the main effects and of the tools and methods applied to assess residuals in maps and power spectra. We also present an overall budget of known systematic effect uncertainties, which are dominated by sidelobe straylight pick-up and imperfect calibration. However, even these two effects are at least two orders of magnitude weaker than the cosmic microwave background (CMB) fluctuations as measured in terms of the angular temperature power spectrum. A residual signal above the noise level is present in the multipole range $\ell<20$, most notably at 30 GHz, and is likely caused by residual Galactic straylight contamination. Current analysis aims to further reduce the level of spurious signals in the data and to improve the systematic effects modelling, in particular with respect to straylight and calibra...

  9. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    Science.gov (United States)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.
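
    To illustrate how such a simulation separates random from systematic thickness errors, here is a hedged toy sketch. The single-layer cosine reflectance model, the refractive index, and the noise level are all invented stand-ins for the real thin-film optics (solved in the paper with the OptiReOpt library):

        import numpy as np
        from scipy.optimize import curve_fit

        N_INDEX = 2.35                              # assumed refractive index

        def reflectance(lam, d):
            # toy single-layer interference model, not the real thin-film optics
            return 0.5 + 0.3 * np.cos(4 * np.pi * N_INDEX * d / lam)

        lam = np.linspace(400.0, 900.0, 256)        # nm, spectrometer operating range
        d_true, noise_sigma = 120.0, 0.005          # nm; photodetector noise (assumed)

        rng = np.random.default_rng(1)
        estimates = []
        for _ in range(200):                        # repeated simulated depositions
            spectrum = reflectance(lam, d_true) + rng.normal(0, noise_sigma, lam.size)
            (d_fit,), _ = curve_fit(reflectance, lam, spectrum, p0=[110.0])
            estimates.append(d_fit)

        estimates = np.array(estimates)
        print(f"systematic thickness error: {estimates.mean() - d_true:+.3f} nm")
        print(f"random thickness error (1 sigma): {estimates.std(ddof=1):.3f} nm")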

  10. FDG-PET, CT, MRI for diagnosis of local residual or recurrent nasopharyngeal carcinoma, which one is the best? A systematic review

    International Nuclear Information System (INIS)

    Liu Tao; Xu Wen; Yan Weili; Ye Ming; Bai Yongrui; Huang Gang

    2007-01-01

    Purpose: To perform a systematic review to compare FDG-PET, CT, and MRI imaging for diagnosis of local residual or recurrent nasopharyngeal carcinoma. Materials and methods: MEDLINE, EMBASE, the CBMdisc databases and some other databases were searched for relevant original articles published from January 1990 to June 2007. Inclusion criteria were as follows: articles were reported in English or Chinese; FDG-PET, CT, or MRI was used to detect local residual or recurrent nasopharyngeal carcinoma; histopathologic analysis and/or close clinical and imaging follow-up for at least 6 months were the reference standard. Two reviewers independently extracted data. Software called 'Meta-DiSc' was used to obtain pooled estimates of sensitivity, specificity, diagnostic odds ratio (DOR), summary receiver operating characteristic (SROC) curves, and the Q* index. Results: Twenty-one articles fulfilled all inclusion criteria. The pooled sensitivity estimates for PET (95%) were significantly higher than for CT (76%) (P < 0.05). Conclusion: FDG-PET was the best modality for diagnosis of local residual or recurrent nasopharyngeal carcinoma. The type of analysis for PET imaging and the section thickness for CT would affect the diagnostic results. Dual-section helical and multi-section helical CT were better than nonhelical and single-section helical CT
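
    As a rough illustration of how pooled estimates of this kind are computed from per-study 2 × 2 tables, consider this Python sketch; the study counts are invented for illustration and are not the review's data:

        import numpy as np

        # Hypothetical per-study 2x2 counts (TP, FN, TN, FP) -- not the review's data.
        studies = [(45, 3, 30, 6), (28, 1, 22, 4), (60, 4, 41, 9)]
        tp, fn, tn, fp = (np.array(col) for col in zip(*studies))

        pooled_sens = tp.sum() / (tp.sum() + fn.sum())
        pooled_spec = tn.sum() / (tn.sum() + fp.sum())
        dor = (tp.sum() * tn.sum()) / (fp.sum() * fn.sum())   # pooled diagnostic odds ratio
        print(f"sensitivity {pooled_sens:.2f}, specificity {pooled_spec:.2f}, DOR {dor:.1f}")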

  11. Random and systematic errors in case–control studies calculating the injury risk of driving under the influence of psychoactive substances

    DEFF Research Database (Denmark)

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.

    2013-01-01

    Between 2006 and 2010, six population based case-control studies were conducted as part of the European research-project DRUID (DRiving Under the Influence of Drugs, alcohol and medicines). The aim of these case-control studies was to calculate odds ratios indicating the relative risk of serious....... The list of indicators that was identified in this study is useful both as guidance for systematic reviews and meta-analyses and for future epidemiological studies in the field of driving under the influence to minimize sources of errors already at the start of the study. © 2013 Published by Elsevier Ltd....

  12. Joint position sense error in people with neck pain: A systematic review.

    Science.gov (United States)

    de Vries, J; Ischebeck, B K; Voogt, L P; van der Geest, J N; Janssen, M; Frens, M A; Kleinrensink, G J

    2015-12-01

    Several studies in recent decades have examined the relationship between proprioceptive deficits and neck pain. However, there is no uniform conclusion on the relationship between the two. Clinically, proprioception is evaluated using the Joint Position Sense Error (JPSE), which reflects a person's ability to accurately return his head to a predefined target after a cervical movement. We focused on differences in JPSE between people with neck pain and healthy controls. Systematic review according to the PRISMA guidelines. Our data sources were Embase, Medline OvidSP, Web of Science, Cochrane Central, CINAHL and Pubmed Publisher. To be included, studies had to compare JPSE of the neck (O) in people with neck pain (P) with JPSE of the neck in healthy controls (C). Fourteen studies were included. Four studies reported that participants with traumatic neck pain had a significantly higher JPSE than healthy controls. Of the eight studies involving people with non-traumatic neck pain, four reported significant differences between the groups. The JPSE did not vary between neck-pain groups. Current literature shows the JPSE to be a relevant measure when it is used correctly. All studies that calculated the JPSE over at least six trials showed a significantly increased JPSE in the neck pain group. This strongly suggests that 'number of repetitions' is a major element in correctly performing the JPSE test. Copyright © 2015 Elsevier Ltd. All rights reserved.
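
    The JPSE itself is a simple statistic. A minimal sketch of the computation, assuming the common convention of averaging absolute repositioning errors over trials (the exact formula varies between the reviewed studies):

        import numpy as np

        def jpse(errors_deg):
            """Joint Position Sense Error: mean absolute repositioning error (degrees).

            Averaging absolute errors over trials is a common convention; the review
            recommends using at least six trials.
            """
            errors = np.asarray(errors_deg, dtype=float)
            if errors.size < 6:
                raise ValueError("use at least six trials for a stable JPSE")
            return np.abs(errors).mean()

        # one value per head-repositioning trial, signed degrees from the target
        print(f"JPSE: {jpse([3.1, -2.4, 4.0, -1.8, 2.9, -3.5]):.2f} deg")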

  13. Unrealized potential and residual consequences of electronic prescribing on pharmacy workflow in the outpatient pharmacy.

    Science.gov (United States)

    Nanji, Karen C; Rothschild, Jeffrey M; Boehne, Jennifer J; Keohane, Carol A; Ash, Joan S; Poon, Eric G

    2014-01-01

    Electronic prescribing systems have often been promoted as a tool for reducing medication errors and adverse drug events. Recent evidence has revealed that adoption of electronic prescribing systems can lead to unintended consequences such as the introduction of new errors. The purpose of this study is to identify and characterize the unrealized potential and residual consequences of electronic prescribing on pharmacy workflow in an outpatient pharmacy. A multidisciplinary team conducted direct observations of workflow in an independent pharmacy and semi-structured interviews with pharmacy staff members about their perceptions of the unrealized potential and residual consequences of electronic prescribing systems. We used qualitative methods to iteratively analyze text data using a grounded theory approach, and derive a list of major themes and subthemes related to the unrealized potential and residual consequences of electronic prescribing. We identified the following five themes: Communication, workflow disruption, cost, technology, and opportunity for new errors. These contained 26 unique subthemes representing different facets of our observations and the pharmacy staff's perceptions of the unrealized potential and residual consequences of electronic prescribing. We offer targeted solutions to improve electronic prescribing systems by addressing the unrealized potential and residual consequences that we identified. These recommendations may be applied not only to improve staff perceptions of electronic prescribing systems but also to improve the design and/or selection of these systems in order to optimize communication and workflow within pharmacies while minimizing both cost and the potential for the introduction of new errors.

  14. Local systematic differences in 2MASS positions

    Science.gov (United States)

    Bustos Fierro, I. H.; Calderón, J. H.

    2018-01-01

    We have found that positions in the 2MASS All-sky Catalog of Point Sources show local systematic differences with characteristic length-scales of ˜ 5 to ˜ 8 arcminutes when compared with several catalogs. We have observed that when 2MASS positions are used in the computation of proper motions, the mentioned systematic differences cause systematic errors in the resulting proper motions. We have developed a method to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The rectified 2MASS catalog with the proposed method can be regarded as an extension of UCAC4 for astrometry with accuracy ˜ 90 mas in its positions, with negligible systematic errors. Also we show that the use of these rectified positions removes the observed systematic pattern in proper motions derived from original 2MASS positions.

  15. Subroutine library for error estimation of matrix computation (Ver. 1.0)

    International Nuclear Information System (INIS)

    Ichihara, Kiyoshi; Shizawa, Yoshihisa; Kishida, Norio

    1999-03-01

    'Subroutine Library for Error Estimation of Matrix Computation' is a subroutine library which aids users in obtaining error ranges for the solutions of linear systems or the eigenvalues of Hermitian matrices. This library contains routines for both sequential computers and parallel computers. The subroutines for linear system error estimation calculate norms of residual vectors, matrices' condition numbers, error bounds of solutions and so on. The subroutines for error estimation of Hermitian matrix eigenvalues derive the error ranges of the eigenvalues according to the Korn-Kato formula. The test matrix generators supply matrices that appear in mathematical research, randomly generated matrices, and matrices that appear in application programs. This user's manual contains a brief mathematical background on error analysis in linear algebra and on the usage of the subroutines. (author)
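
    The linear-system routines described here revolve around two quantities that are easy to reproduce with NumPy: the residual norm and a condition-number-based bound on the relative solution error. A hedged sketch of those quantities, not the library's actual interface:

        import numpy as np

        rng = np.random.default_rng(2)
        A = rng.normal(size=(50, 50))
        x_true = rng.normal(size=50)
        b = A @ x_true

        x_hat = np.linalg.solve(A, b)                    # computed solution
        r = b - A @ x_hat                                # residual vector
        # first-order bound:  ||x - x_hat|| / ||x||  <=  cond(A) * ||r|| / ||b||
        bound = np.linalg.cond(A) * np.linalg.norm(r) / np.linalg.norm(b)
        actual = np.linalg.norm(x_true - x_hat) / np.linalg.norm(x_true)
        print(f"residual {np.linalg.norm(r):.2e}, bound {bound:.2e}, actual {actual:.2e}")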

  16. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...

  17. Satellite Magnetic Residuals Investigated With Geostatistical Methods

    DEFF Research Database (Denmark)

    Fox Maule, Chaterine; Mosegaard, Klaus; Olsen, Nils

    2005-01-01

    (which consists of measurement errors and unmodeled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyze the residuals of the Oersted (09d/04) field model (www.dsri.dk/Oersted/Field models/IGRF 2005 candidates/), which is based...

  18. Heuristics and Cognitive Error in Medical Imaging.

    Science.gov (United States)

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  19. Systematic and random errors in lattice parameter determinations

    International Nuclear Information System (INIS)

    Nascimento, E.M.

    1980-01-01

    A new method is proposed for the evaluation of diffraction data used in precise determination of lattice parameters. The method is based on the separation of random and systematic errors at the level of the diffraction angles, where the random part of the errors is independent of the θ angle. The separation is enabled by the assumption that the systematic part of the errors depends linearly on the θ angle. In that situation, high precision in lattice parameter determination is related more to reducing the random error content than to the presence of unremoved systematic errors. (Author)

  20. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin; Sun, Shuyu

    2016-01-01

    for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator

  1. Weld residual stresses near the bimetallic interface in clad RPV steel: A comparison between deep-hole drilling and neutron diffraction data

    Energy Technology Data Exchange (ETDEWEB)

    James, M.N., E-mail: mjames@plymouth.ac.uk [School of Marine Science and Engineering, University of Plymouth, Drake Circus, Plymouth (United Kingdom); Department of Mechanical Engineering, Nelson Mandela Metropolitan University, Port Elizabeth (South Africa); Newby, M.; Doubell, P. [Eskom Holdings SOC Ltd, Lower Germiston Road, Rosherville, Johannesburg (South Africa); Hattingh, D.G. [Department of Mechanical Engineering, Nelson Mandela Metropolitan University, Port Elizabeth (South Africa); Serasli, K.; Smith, D.J. [Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol (United Kingdom)]

    2014-07-01

    Highlights: • Identification of residual stress trends across bimetallic interface in stainless clad RPV. • Comparison between deep hole drilling (DHD – stress components in two directions) and neutron diffraction (ND – stress components in three directions). • Results indicate that both techniques can assess the trends in residual stress across the interface. • Neutron diffraction gives more detailed information on transient residual stress peaks. - Abstract: The inner surface of ferritic steel reactor pressure vessels (RPV) is clad with strip welded austenitic stainless steel primarily to increase the long-term corrosion resistance of the ferritic vessel. The strip welding process used in the cladding operation induces significant residual stresses in the clad layer and in the RPV steel substrate, arising both from the thermal cycle and from the very different thermal and mechanical properties of the austenitic clad layer and the ferritic RPV steel. This work measures residual stresses using the deep hole drilling (DHD) and neutron diffraction (ND) techniques and compares residual stress data obtained by the two methods in a stainless clad coupon of A533B Class 2 steel. The results give confidence that both techniques are capable of assessing the trends in residual stresses, and their magnitudes. Significant differences are that the ND data shows greater values of the tensile stress peaks (∼100 MPa) than the DHD data but has a higher systematic error associated with it. The stress peaks are sharper with the ND technique and also differ in spatial position by around 1 mm compared with the DHD technique.

  2. Weld residual stresses near the bimetallic interface in clad RPV steel: A comparison between deep-hole drilling and neutron diffraction data

    International Nuclear Information System (INIS)

    James, M.N.; Newby, M.; Doubell, P.; Hattingh, D.G.; Serasli, K.; Smith, D.J.

    2014-01-01

    Highlights: • Identification of residual stress trends across bimetallic interface in stainless clad RPV. • Comparison between deep hole drilling (DHD – stress components in two directions) and neutron diffraction (ND – stress components in three directions). • Results indicate that both techniques can assess the trends in residual stress across the interface. • Neutron diffraction gives more detailed information on transient residual stress peaks. - Abstract: The inner surface of ferritic steel reactor pressure vessels (RPV) is clad with strip welded austenitic stainless steel primarily to increase the long-term corrosion resistance of the ferritic vessel. The strip welding process used in the cladding operation induces significant residual stresses in the clad layer and in the RPV steel substrate, arising both from the thermal cycle and from the very different thermal and mechanical properties of the austenitic clad layer and the ferritic RPV steel. This work measures residual stresses using the deep hole drilling (DHD) and neutron diffraction (ND) techniques and compares residual stress data obtained by the two methods in a stainless clad coupon of A533B Class 2 steel. The results give confidence that both techniques are capable of assessing the trends in residual stresses, and their magnitudes. Significant differences are that the ND data shows greater values of the tensile stress peaks (∼100 MPa) than the DHD data but has a higher systematic error associated with it. The stress peaks are sharper with the ND technique and also differ in spatial position by around 1 mm compared with the DHD technique

  3. Fusing metabolomics data sets with heterogeneous measurement errors

    Science.gov (United States)

    Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.

    2018-01-01

    Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by the difference in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for the measurement error heterogeneity: by transformation of the raw data, by weighted filtering before modelling, and by a modelling approach using a weighted sum of residuals. For an illustration of these different approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. In conclusion, the filtering and modelling approaches, which both estimate a model of the measurement error, did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable due to the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490
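
    One way to realize the 'weighted sum of residuals' idea is weighted least squares, with weights inversely proportional to each platform's error variance. A toy Python sketch with invented data, not the authors' exact model:

        import numpy as np

        rng = np.random.default_rng(3)
        x = np.linspace(0.0, 10.0, 40)
        y_precise = 2.0 * x + rng.normal(0, 0.5, x.size)   # platform with small error
        y_noisy = 2.0 * x + rng.normal(0, 2.0, x.size)     # platform with large error

        X = np.concatenate([x, x])[:, None]
        y = np.concatenate([y_precise, y_noisy])
        w = np.concatenate([np.full(x.size, 1 / 0.5**2),   # weight = 1 / error variance
                            np.full(x.size, 1 / 2.0**2)])

        # weighted least squares: solve (X^T W X) beta = X^T W y
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        print(f"weighted slope estimate: {beta[0]:.3f}")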

  4. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  5. ValidatorDB: database of up-to-date validation results for ligands and non-standard residues from the Protein Data Bank.

    Science.gov (United States)

    Sehnal, David; Svobodová Vařeková, Radka; Pravda, Lukáš; Ionescu, Crina-Maria; Geidl, Stanislav; Horský, Vladimír; Jaiswal, Deepti; Wimmerová, Michaela; Koča, Jaroslav

    2015-01-01

    Following the discovery of serious errors in the structure of biomacromolecules, structure validation has become a key topic of research, especially for ligands and non-standard residues. ValidatorDB (freely available at http://ncbr.muni.cz/ValidatorDB) offers a new step in this direction, in the form of a database of validation results for all ligands and non-standard residues from the Protein Data Bank (all molecules with seven or more heavy atoms). Model molecules from the wwPDB Chemical Component Dictionary are used as reference during validation. ValidatorDB covers the main aspects of validation of annotation, and additionally introduces several useful validation analyses. The most significant is the classification of chirality errors, allowing the user to distinguish between serious issues and minor inconsistencies. Other such analyses are able to report, for example, completely erroneous ligands, alternate conformations or complete identity with the model molecules. All results are systematically classified into categories, and statistical evaluations are performed. In addition to detailed validation reports for each molecule, ValidatorDB provides summaries of the validation results for the entire PDB, for sets of molecules sharing the same annotation (three-letter code) or the same PDB entry, and for user-defined selections of annotations or PDB entries. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography

    Science.gov (United States)

    Withers, Paul; Lorenz, R. D.; Neumann, G. A.

    2002-01-01

    Each Viking lander measured a topographic profile during entry. Comparing to MOLA (Mars Orbiter Laser Altimeter), we find a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.

  7. Systematic errors in the tables of theoretical total internal conversion coefficients

    International Nuclear Information System (INIS)

    Dragoun, O.; Rysavy, M.

    1992-01-01

    Some of the total internal conversion coefficients presented in the widely used tables of Rosel et al (1978 Atom. Data Nucl. Data Tables 21, 291) were found to be erroneous. The errors appear for some low transition energies, all multipolarities, and probably for all elements. The origin of the errors is explained. The subshell conversion coefficients of Rosel et al, where available, agree with our calculations to within a few percent. (author)

  8. Analysis and reduction of 3D systematic and random setup errors during the simulation and treatment of lung cancer patients with CT-based external beam radiotherapy dose planning.

    NARCIS (Netherlands)

    Boer, H.D. de; Sornsen de Koste, J.R. van; Senan, S.; Visser, A.G.; Heijmen, B.J.M.

    2001-01-01

    PURPOSE: To determine the magnitude of the errors made in (a) the setup of patients with lung cancer on the simulator relative to their intended setup with respect to the planned treatment beams and (b) the setup of these patients on the treatment unit. To investigate how the systematic component

  9. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainties for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration are different for different kinds of instrumental measurements. The main sources of uncertainties for retrospective measurements conducted by surface-trap techniques can be divided into two groups: errors of surface 210Pb (210Po) activity measurements and uncertainties of the transfer from 210Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface-trap retrospective technique can be decreased to 35%.
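
    When the individual error sources are independent, relative uncertainties of this kind combine in quadrature. A small Python sketch with invented component values (the 35% figure above is the authors' result, not reproduced here):

        import math

        # Illustrative relative uncertainties (1 sigma, as fractions); the component
        # values are assumptions, not the paper's data.
        components = {
            "reference equipment bias":  0.10,
            "calibration, Poisson":      0.08,
            "calibration, non-Poisson":  0.12,
            "measurement, Poisson":      0.15,
            "measurement, non-Poisson":  0.20,
        }
        total = math.sqrt(sum(u ** 2 for u in components.values()))
        print(f"combined relative uncertainty: {total:.0%}")   # quadrature sum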

  10. Effect of residual stress on the integrity of a branch connection

    International Nuclear Information System (INIS)

    Law, M.; Kirstein, O.; Luzin, V.

    2012-01-01

    A new connection to an existing gas pipeline was made by hot-tapping, welding directly onto a pressurised pipeline. The welds were not post-weld heat treated, causing significant residual stresses. The critical weld had residual stresses determined by neutron diffraction using ANSTO's residual stress diffractometer, Kowari. The maximum measured residual stress (290 MPa) was 60% of the yield strength. The magnitudes of errors from a number of sources were estimated. An integrity assessment of the welded branch connection was performed with the measured residual stress values and with residual stress distributions from the BS 7910 and API 579 analysis codes. Analysis using estimates of residual stress from API 579 overestimated the critical crack size. Highlights: ► Residual stresses were measured by neutron diffraction in a thick section, non post-weld heat treated ferritic weld. ► There is little published data on these welds. ► The work compares the measured residual stresses with code-based residual stress distributions.

  11. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  12. Measuring nuclear-spin-dependent parity violation with molecules: Experimental methods and analysis of systematic errors

    Science.gov (United States)

    Altuntaş, Emine; Ammon, Jeffrey; Cahn, Sidney B.; DeMille, David

    2018-04-01

    Nuclear-spin-dependent parity violation (NSD-PV) effects in atoms and molecules arise from Z0 boson exchange between electrons and the nucleus and from the magnetic interaction between electrons and the parity-violating nuclear anapole moment. It has been proposed to study NSD-PV effects using an enhancement of the observable effect in diatomic molecules [D. DeMille et al., Phys. Rev. Lett. 100, 023003 (2008), 10.1103/PhysRevLett.100.023003]. Here we demonstrate highly sensitive measurements of this type, using the test system 138Ba19F. We show that systematic errors associated with our technique can be suppressed to at least the level of the present statistical sensitivity. With ˜170 h of data, we measure the matrix element W of the NSD-PV interaction with uncertainty δW/(2π) < 0.7 Hz for each of two configurations where W must have different signs. This sensitivity would be sufficient to measure NSD-PV effects of the size anticipated across a wide range of nuclei.

  13. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper presents a detailed approach to the synthetic error analysis of the star tracker that does not require complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  14. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper presents a detailed approach to the synthetic error analysis of the star tracker that does not require complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  15. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
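
    The SQ-error decomposition referred to here is easy to verify empirically: the expected squared error at a point equals bias² + variance + noise. A Python sketch that deliberately uses a biased model (a straight line fitted to a sine); all settings are illustrative:

        import numpy as np

        rng = np.random.default_rng(4)

        def true_f(x):
            return np.sin(x)

        x = np.linspace(0.0, np.pi, 30)
        x0, noise_sd, runs = 1.0, 0.3, 5000

        preds = np.empty(runs)
        for i in range(runs):
            y = true_f(x) + rng.normal(0, noise_sd, x.size)   # noisy training sample
            coef = np.polyfit(x, y, 1)                        # deliberately biased model
            preds[i] = np.polyval(coef, x0)

        bias2 = (preds.mean() - true_f(x0)) ** 2
        variance = preds.var()
        noise = noise_sd ** 2
        empirical = np.mean((true_f(x0) + rng.normal(0, noise_sd, runs) - preds) ** 2)
        print(f"bias^2 + variance + noise = {bias2 + variance + noise:.4f}")
        print(f"empirical expected SQ error = {empirical:.4f}")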

  16. Residual sweeping errors in turbulent particle pair diffusion in a Lagrangian diffusion model.

    Science.gov (United States)

    Malik, Nadeem A

    2017-01-01

    Thomson, D. J. & Devenish, B. J. [J. Fluid Mech. 526, 277 (2005)] and others have suggested that sweeping effects make Lagrangian properties in Kinematic Simulations (KS), Fung et al [Fung J. C. H., Hunt J. C. R., Malik N. A. & Perkins R. J. J. Fluid Mech. 236, 281 (1992)], unreliable. However, such a conclusion can only be drawn under the assumption of locality. The major aim here is to quantify the sweeping errors in KS without assuming locality. Through a novel analysis based upon analysing pairs of particle trajectories in a frame of reference moving with the large energy-containing scales of motion, it is shown that the normalized integrated error [Formula: see text] in the turbulent pair diffusivity (K) due to the sweeping effect decreases with increasing pair separation (σl), such that [Formula: see text] as σl/η → ∞; and [Formula: see text] as σl/η → 0. η is the Kolmogorov turbulence microscale. There is an intermediate range of separations 1 < σl/η < ∞ in which the error [Formula: see text] remains negligible. Simulations using KS show that in the swept frame of reference, this intermediate range is large, covering almost the entire inertial subrange simulated, 1 < σl/η < 10⁵, implying that the deviation from locality observed in KS cannot be attributed to sweeping errors. This is important for pair diffusion theory and modeling. PACS numbers: 47.27.E?, 47.27.Gs, 47.27.jv, 47.27.Ak, 47.27.tb, 47.27.eb, 47.11.-j.

  17. Writing errors by adults and by children

    NARCIS (Netherlands)

    Nes, van F.L.

    1984-01-01

    Writing errors are defined as occasional deviations from a person' s normal handwriting; thus they are different from spelling mistakes. The deviations are systematic in nature to a certain degree and can therefore be quantitatively classified in accordance with (1) type and (2) location in a word.

  18. How are medication errors defined? A systematic literature review of definitions and characteristics

    DEFF Research Database (Denmark)

    Lisby, Marianne; Nielsen, L P; Brock, Birgitte

    2010-01-01

    Multiplicity in terminology has been suggested as a possible explanation for the variation in the prevalence of medication errors. So far, few empirical studies have challenged this assertion. The objective of this review was, therefore, to describe the extent and characteristics of medication error definitions in hospitals and to consider the consequences for measuring the prevalence of medication errors.

  19. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Lili; Tian, Li; Wang, Desheng

    2008-10-31

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R³, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

  20. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods
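
    The Taylor-series step works out simply for a product-form measurement model. A hedged sketch: the model m = weight × concentration × enrichment and all numbers below are illustrative, not the exercise's data:

        import math

        # Product-form measurement model: m = weight * concentration * enrichment.
        w, sd_w = 1000.0, 2.0        # item weight, kg (illustrative)
        c, sd_c = 0.850, 0.004       # uranium concentration, fraction (illustrative)
        e, sd_e = 0.930, 0.002       # 235U enrichment, fraction (illustrative)

        m = w * c * e                                            # 235U mass, kg
        # first-order Taylor propagation for a product: relative variances add
        rel_var = (sd_w / w) ** 2 + (sd_c / c) ** 2 + (sd_e / e) ** 2
        sd_m = m * math.sqrt(rel_var)
        print(f"235U mass {m:.1f} kg +/- {sd_m:.2f} kg (1 sigma)")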

  1. Evaluation of positioning errors of the patient using cone beam CT megavoltage; Evaluacion de errores de posicionamiento del paciente mediante Cone Beam CT de megavoltaje

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-07-01

    Image-guided radiation therapy allows the positioning of the patient in the treatment unit to be assessed and corrected, thus reducing the uncertainties due to patient setup. This work assesses systematic and random errors from the corrections made to a series of patients with different diseases using an off-line megavoltage cone beam CT (CBCT) protocol. (Author)
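
    A common convention for extracting the two components from such correction data (stated here as an assumption, since the abstract gives no formulas) takes the systematic error as the spread of per-patient mean corrections and the random error as the root mean square of per-patient standard deviations:

        import numpy as np

        # Per-patient daily setup corrections along one axis (mm); values invented.
        shifts = {
            "pt1": [1.8, 2.4, 1.1, 2.9, 2.2],
            "pt2": [-0.4, 0.3, -1.1, 0.2, -0.6],
            "pt3": [3.1, 2.2, 2.8, 3.6, 2.5],
        }
        means = np.array([np.mean(v) for v in shifts.values()])
        sds = np.array([np.std(v, ddof=1) for v in shifts.values()])

        M = means.mean()                    # group mean correction
        Sigma = means.std(ddof=1)           # systematic error: spread of patient means
        sigma = np.sqrt((sds ** 2).mean())  # random error: RMS of patient SDs
        print(f"M = {M:.1f} mm, Sigma = {Sigma:.1f} mm, sigma = {sigma:.1f} mm")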

  2. An adjoint-based scheme for eigenvalue error improvement

    International Nuclear Information System (INIS)

    Merton, S.R.; Smedley-Stevenson, R.P.; Pain, C.C.; El-Sheikh, A.H.; Buchan, A.G.

    2011-01-01

    A scheme for improving the accuracy and reducing the error in eigenvalue calculations is presented. Using a first-order Taylor series expansion of both the eigenvalue solution and the residual of the governing equation, an approximation to the error in the eigenvalue is derived. This is done using a convolution of the equation residual and adjoint solution, which is calculated in-line with the primal solution. A defect correction on the solution is then performed in which the approximation to the error is used to apply a correction to the eigenvalue. The method is shown to dramatically improve convergence of the eigenvalue. The equation for the eigenvalue is shown to simplify when certain normalisations are applied to the eigenvector. Two such normalisations are considered; the first of these is a fission-source type of normalisation and the second is an eigenvector normalisation. Results are demonstrated on a number of demanding elliptic problems using continuous Galerkin weighted finite elements. Moreover, the correction scheme may also be applied to hyperbolic problems and arbitrary discretizations. This is not limited to spatial corrections and may be used throughout the phase space of the discrete equation. The applied correction not only improves fidelity of the calculation, it allows the reliability of numerical schemes to be assessed and could be used to guide mesh adaption algorithms or to automate mesh generation schemes. (author)
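
    The core idea, an eigenvalue defect correction built from the equation residual weighted by the adjoint solution, can be illustrated on a small matrix eigenproblem. A hedged sketch: for clarity it uses the exact adjoint eigenvector, in which case the correction recovers the eigenvalue error exactly; with an approximate adjoint it is first-order accurate:

        import numpy as np

        rng = np.random.default_rng(5)
        # Build a matrix with a known real spectrum so the adjoint problem stays real.
        D = np.diag([1.0, 2.5, 4.0, 7.0])
        P = rng.normal(size=(4, 4))
        A = P @ D @ np.linalg.inv(P)

        lam_true = 2.5
        x = P[:, 1]                          # right eigenvector for lam_true
        y = np.linalg.inv(P)[1, :]           # adjoint (left) eigenvector: y A = lam y

        x_hat = x + 0.05 * rng.normal(size=4)             # crude approximate eigenvector
        lam_hat = (x_hat @ A @ x_hat) / (x_hat @ x_hat)   # crude eigenvalue estimate

        r = A @ x_hat - lam_hat * x_hat      # equation residual
        delta = (y @ r) / (y @ x_hat)        # adjoint-weighted residual correction
        print(f"error before: {abs(lam_hat - lam_true):.2e}")
        print(f"error after:  {abs(lam_hat + delta - lam_true):.2e}")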

  3. Improving Type Error Messages in OCaml

    Directory of Open Access Journals (Sweden)

    Arthur Charguéraud

    2015-12-01

    Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently-precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise and systematically useful to the programmer, but also to handle a full-blown programming language and to cope with large-sized programs efficiently. In this work, we present a modification to the traditional ML type inference algorithm implemented in OCaml that, by significantly reducing the left-to-right bias, allows us to report error messages that are more helpful to the programmer. Our algorithm remains fully predictable and continues to produce fairly concise error messages that always help to make some progress towards fixing the code. We implemented our approach as a patch to the OCaml compiler in just a few hundred lines of code. We believe that this patch should benefit not just beginners, but also experienced programmers developing large-scale OCaml programs.

  4. Dosimetric Effect of Intrafraction Motion and Residual Setup Error for Hypofractionated Prostate Intensity-Modulated Radiotherapy With Online Cone Beam Computed Tomography Image Guidance

    International Nuclear Information System (INIS)

    Adamson, Justus; Wu Qiuwen; Yan Di

    2011-01-01

    Purpose: To quantify the dosimetric effect and margins required to account for prostate intrafractional translation and residual setup error in a cone beam computed tomography (CBCT)-guided hypofractionated radiotherapy protocol. Methods and Materials: Prostate position after online correction was measured during dose delivery using simultaneous kV fluoroscopy and posttreatment CBCT in 572 fractions to 30 patients. We reconstructed the dose distribution to the clinical tumor volume (CTV) using a convolution of the static dose with a probability density function (PDF) based on the kV fluoroscopy, and we calculated the minimum dose received by 99% of the CTV (D99). We compared reconstructed doses when the convolution was performed per beam, per patient, and when the PDF was created using posttreatment CBCT. We determined the minimum axis-specific margins to limit CTV D99 reduction to 1%. Results: For 3-mm margins, D99 reduction was ≤5% for 29/30 patients. Using post-CBCT rather than localizations at treatment delivery exaggerated dosimetric effects by ∼47%, while there was no such bias between the dose convolved with a beam-specific and patient-specific PDF. After eight fractions, final cumulative D99 could be predicted with a root mean square error of <1%. For 90% of patients, the required margins were ≤2, 4, and 3 mm, with 70%, 40%, and 33% of patients requiring no right-left (RL), anteroposterior (AP), and superoinferior margins, respectively. Conclusions: For protocols with CBCT guidance, RL, AP, and SI margins of 2, 4, and 3 mm are sufficient to account for translational errors; however, the large variation in patient-specific margins suggests that adaptive management may be beneficial.
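
    The dose-reconstruction step, convolving the static dose with a motion probability density function and re-evaluating D99, can be sketched in one dimension. All geometry and motion parameters below are invented for illustration:

        import numpy as np

        grid = np.arange(-30.0, 30.0, 0.5)                 # mm, 1-D dose grid
        static = np.where(np.abs(grid) <= 15.0, 70.0, 0.0) # 70 Gy field, toy geometry
        static = np.convolve(static, np.ones(9) / 9, mode="same")  # soften the penumbra

        sigma = 2.0                                        # mm, residual motion SD (assumed)
        kx = np.arange(-10.0, 10.5, 0.5)
        pdf = np.exp(-0.5 * (kx / sigma) ** 2)
        pdf /= pdf.sum()                                   # motion probability density

        blurred = np.convolve(static, pdf, mode="same")    # motion-averaged dose

        ctv = np.abs(grid) <= 10.0                         # CTV occupies the central 20 mm
        d99 = lambda dose: np.percentile(dose[ctv], 1)     # dose to 99% of the CTV
        print(f"D99: static {d99(static):.1f} Gy -> blurred {d99(blurred):.1f} Gy")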

  5. Dosimetric effect of intrafraction motion and residual setup error for hypofractionated prostate intensity-modulated radiotherapy with online cone beam computed tomography image guidance.

    LENUS (Irish Health Repository)

    Adamson, Justus

    2012-02-01

    PURPOSE: To quantify the dosimetric effect and margins required to account for prostate intrafractional translation and residual setup error in a cone beam computed tomography (CBCT)-guided hypofractionated radiotherapy protocol. METHODS AND MATERIALS: Prostate position after online correction was measured during dose delivery using simultaneous kV fluoroscopy and posttreatment CBCT in 572 fractions to 30 patients. We reconstructed the dose distribution to the clinical tumor volume (CTV) using a convolution of the static dose with a probability density function (PDF) based on the kV fluoroscopy, and we calculated the minimum dose received by 99% of the CTV (D(99)). We compared reconstructed doses when the convolution was performed per beam, per patient, and when the PDF was created using posttreatment CBCT. We determined the minimum axis-specific margins to limit CTV D(99) reduction to 1%. RESULTS: For 3-mm margins, D(99) reduction was ≤5% for 29/30 patients. Using post-CBCT rather than localizations at treatment delivery exaggerated dosimetric effects by ~47%, while there was no such bias between the dose convolved with a beam-specific and patient-specific PDF. After eight fractions, final cumulative D(99) could be predicted with a root mean square error of <1%. For 90% of patients, the required margins were ≤2, 4, and 3 mm, with 70%, 40%, and 33% of patients requiring no right-left (RL), anteroposterior (AP), and superoinferior margins, respectively. CONCLUSIONS: For protocols with CBCT guidance, RL, AP, and SI margins of 2, 4, and 3 mm are sufficient to account for translational errors; however, the large variation in patient-specific margins suggests that adaptive management may be beneficial.

  6. Residual gauge invariance of Hamiltonian lattice gauge theories

    International Nuclear Information System (INIS)

    Ryang, S.; Saito, T.; Shigemoto, K.

    1984-01-01

    The time-independent residual gauge invariance of Hamiltonian lattice gauge theories is considered. Eigenvalues and eigenfunctions of the unperturbed Hamiltonian are found in terms of Gegenbauer polynomials. Physical states which satisfy the subsidiary condition corresponding to Gauss' law are constructed systematically. (orig.)

  7. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    Science.gov (United States)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to assess this error. We find we can decrease the global temperature trend by about 0.07 K/decade. In addition, there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry. This drift error arises from the diurnal cycle in temperature and from the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  8. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NARCIS (Netherlands)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Ozben, C. S.; Prasuhn, D.; Sandri, P. Levi; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-01-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY

  9. Identification of residue pairing in interacting β-strands from a predicted residue contact map.

    Science.gov (United States)

    Mao, Wenzhi; Wang, Tong; Zhang, Wenxuan; Gong, Haipeng

    2018-04-19

    Despite the rapid progress of protein residue contact prediction, predicted residue contact maps frequently contain many errors. However, information on residue pairing in β strands could be extracted from a noisy contact map, due to the presence of characteristic contact patterns in β-β interactions. This information may benefit the tertiary structure prediction of mainly β proteins. In this work, we propose a novel ridge-detection-based β-β contact predictor to identify residue pairing in β strands from any predicted residue contact map. Our algorithm RDb2C adopts ridge detection, a well-developed technique in computer image processing, to capture consecutive residue contacts, and then utilizes a novel multi-stage random forest framework to integrate the ridge information and additional features for prediction. Starting from the predicted contact map of CCMpred, RDb2C remarkably outperforms all state-of-the-art methods on two conventional test sets of β proteins (BetaSheet916 and BetaSheet1452), and achieves F1-scores of ~62% and ~76% at the residue level and strand level, respectively. Taking the prediction of the more advanced RaptorX-Contact as input, RDb2C achieves impressively higher performance, with F1-scores reaching ~76% and ~86% at the residue level and strand level, respectively. In a test of structural modeling using the top 1L predicted contacts as constraints, for 61 mainly β proteins, the average TM-score achieves 0.442 when using the raw RaptorX-Contact prediction, but increases to 0.506 when using the improved prediction by RDb2C. Our method can significantly improve the prediction of β-β contacts from any predicted residue contact maps. Prediction results of our algorithm could be directly applied to effectively facilitate the practical structure prediction of mainly β proteins. All source data and codes are available at http://166.111.152.91/Downloads.html or at the GitHub address https://github.com/wzmao/RDb2C.

  10. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    Science.gov (United States)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.

  11. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, D; Ehler, E [University of Minnesota, Minneapolis, MN (United States)]

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.
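
    The pass-rate metric used in this record is the gamma index. A simplified 1D global gamma (3%/3mm, no dose threshold) sketch is given below; clinical tools evaluate gamma on 3D dose grids, and the profiles here are synthetic.

        import numpy as np

        def gamma_pass_rate(dose_ref, dose_eval, dx, dose_tol=0.03, dist_tol=3.0):
            """Fraction of reference points with 1D global gamma <= 1."""
            pos = np.arange(len(dose_ref)) * dx
            dmax = dose_ref.max()                  # global normalization
            gammas = np.empty(len(dose_ref))
            for i in range(len(dose_ref)):
                dd = (dose_eval - dose_ref[i]) / (dose_tol * dmax)
                rr = (pos - pos[i]) / dist_tol
                gammas[i] = np.sqrt(dd**2 + rr**2).min()
            return np.mean(gammas <= 1.0)

        x = np.linspace(0, 100, 401)               # 0.25 mm grid
        ref = np.exp(-((x - 50) / 20) ** 2)        # toy reference profile
        shifted = np.exp(-((x - 52) / 20) ** 2)    # 2 mm systematic shift
        print(f"pass rate: {100 * gamma_pass_rate(ref, shifted, 0.25):.1f}%")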

  12. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    International Nuclear Information System (INIS)

    Sterling, D; Ehler, E

    2015-01-01

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing

  13. The Role of Accounting Conservatism and Macroeconomic Risk Factors in the Residual Income Model: A Study on the Indonesian Stock Exchange

    Directory of Open Access Journals (Sweden)

    Andry Irwanto

    2015-01-01

    This study examines the association of accounting conservatism, growth, and macroeconomic risk factors with the valuation error of the residual income model on the Indonesian Stock Exchange. We use beta, book-to-market ratio (B/M), and size as proxies for macroeconomic risk. Using a sample of 186 companies taken from the LQ-45 for the years 2001 - 2005, we find that accounting conservatism and growth have no significant influence on residual income model valuation error. B/M has a significant influence on valuation error, with a sign consistent with theoretical predictions. Beta and size have no significant influence on valuation error. Overall, macroeconomic risk factors explain the valuation error better than accounting-based factors. Future research is expected to find accounting variables that can represent macroeconomic risk and to test their ability to explain valuation error. Future research also needs to confirm the relevance of accounting conservatism in stock valuation after the implementation of IFRS in Indonesia.

  14. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    Energy Technology Data Exchange (ETDEWEB)

    Strömberg, Sten, E-mail: sten.stromberg@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Nistor, Mihaela, E-mail: mn@bioprocesscontrol.com [Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden); Liu, Jing, E-mail: jing.liu@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden)

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
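
    The ambient-condition bias discussed here enters through the volumetric gas measurement. A minimal sketch of the standard ideal-gas normalization of a measured, water-saturated gas volume to dry gas at 0 °C and 101.325 kPa follows; the Magnus-type vapour-pressure correlation and the readings are assumptions, not values from the paper.

        import math

        def water_vapour_pressure_kpa(t_celsius):
            # Magnus-type approximation for water, an assumption here; BMP
            # protocols may prescribe a specific correlation.
            return 0.61094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

        def normalize_volume(v_ml, t_celsius, p_kpa, saturated=True):
            """Measured (wet) gas volume -> dry volume at 273.15 K, 101.325 kPa."""
            p_dry = p_kpa - (water_vapour_pressure_kpa(t_celsius) if saturated else 0.0)
            return v_ml * (p_dry / 101.325) * (273.15 / (273.15 + t_celsius))

        # The same raw 100 mL reading at sea level vs. at high altitude:
        print(normalize_volume(100.0, 25.0, 101.3))   # ~89 mL
        print(normalize_volume(100.0, 25.0, 85.0))    # ~74 mL: systematic offset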

  15. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    International Nuclear Information System (INIS)

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-01-01

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.

  16. Tolerable systematic errors in Really Large Hadron Collider dipoles

    International Nuclear Information System (INIS)

    Peggs, S.; Dell, F.

    1996-01-01

    Maximum allowable systematic harmonics for arc dipoles in a Really Large Hadron Collider are derived. The possibility of half cell lengths much greater than 100 meters is justified. A convenient analytical model evaluating horizontal tune shifts is developed, and tested against a sample high field collider

  17. The Residual Setup Errors of Different IGRT Alignment Procedures for Head and Neck IMRT and the Resulting Dosimetric Impact

    International Nuclear Information System (INIS)

    Graff, Pierre; Kirby, Neil; Weinberg, Vivian; Chen, Josephine; Yom, Sue S.; Lambert, Louise; Pouliot, Jean

    2013-01-01

    Purpose: To assess residual setup errors during head and neck radiation therapy and the resulting consequences for the delivered dose for various patient alignment procedures. Methods and Materials: Megavoltage cone beam computed tomography (MVCBCT) scans from 11 head and neck patients who underwent intensity modulated radiation therapy were used to assess setup errors. Each MVCBCT scan was registered to its reference planning kVCT, with seven different alignment procedures: automatic alignment and manual registration to 6 separate bony landmarks (sphenoid, left/right maxillary sinuses, mandible, cervical 1 [C1]-C2, and C7-thoracic 1 [T1] vertebrae). Shifts in the different alignments were compared with each other to determine whether there were any statistically significant differences. Then, the dose distribution was recalculated on 3 MVCBCT images per patient for every alignment procedure. The resulting dose-volume histograms for targets and organs at risk (OARs) were compared to those from the planning kVCTs. Results: The registration procedures produced statistically significant global differences in patient alignment and actual dose distribution, calling for a need for standardization of patient positioning. Vertically, the automatic, sphenoid, and maxillary sinuses alignments mainly generated posterior shifts and resulted in mean increases in maximal dose to OARs of >3% of the planned dose. The suggested choice of C1-C2 as a reference landmark appears valid, combining both OAR sparing and target coverage. Assuming this choice, relevant margins to apply around volumes of interest at the time of planning to take into account the relative mobility of other regions are discussed. Conclusions: Use of different alignment procedures for treating head and neck patients produced variations in patient setup and dose distribution. With concern for standardizing practice, C1-C2 reference alignment with relevant margins around planning volumes seems to be a valid approach.

  18. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    Science.gov (United States)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. The overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The vital step is that the fitting residual is fed back to the overlapping peaks and multiple curve fitting processes are performed to obtain a lower-residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectrum of five different concentrations of CuSO₄·5H₂O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. It can be seen that the error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, which can be applied to the decomposition and correction of overlapping peaks in the LIBS spectrum.
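
    One plausible reading of the error-compensation scheme described above, sketched with synthetic data: overlapping Gaussian peaks are fitted, the fitting residual is fed back into the target spectrum, and fitting is repeated until the residual norm stops improving. Peak shapes, data and stopping rule are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_peaks(x, a1, c1, w1, a2, c2, w2):
            return (a1 * np.exp(-((x - c1) / w1) ** 2)
                    + a2 * np.exp(-((x - c2) / w2) ** 2))

        rng = np.random.default_rng(1)
        x = np.linspace(321, 327, 600)                 # wavelength axis (nm)
        y = two_peaks(x, 1.0, 323.0, 0.4, 0.7, 324.0, 0.5) + rng.normal(0, 0.01, x.size)

        p0 = [0.8, 322.8, 0.5, 0.8, 324.2, 0.5]
        target, best = y.copy(), None
        for _ in range(5):                             # compensation iterations
            popt, _ = curve_fit(two_peaks, x, target, p0=p0)
            resid = y - two_peaks(x, *popt)            # residual vs. measured data
            if best is not None and np.linalg.norm(resid) >= best:
                break
            best, p0 = np.linalg.norm(resid), popt
            target = y + resid                         # feed the residual back
        print("final residual norm:", best)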

  19. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify the haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: In numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with an increasing number of loci, increasing minor allele frequency of the SNPs, decreasing correlation between the alleles, and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information whether a specific risk haplotype can be expected to be reconstructed with essentially no or with high misclassification, and thus indicates the magnitude of the expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
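
    The sensitivity and specificity introduced in this record are the usual misclassification measures applied per haplotype. A minimal sketch, assuming an invented true-versus-reconstructed count table:

        import numpy as np

        haplotypes = ["ACG", "ATG", "GCG", "GTA"]
        # Rows: true haplotype; columns: reconstructed haplotype.
        counts = np.array([[480,  12,   6,   2],
                           [ 15, 230,   4,   1],
                           [  8,   3, 120,   9],
                           [  1,   0,   7,  22]])

        for k, h in enumerate(haplotypes):
            tp = counts[k, k]
            fn = counts[k, :].sum() - tp       # true h, reconstructed as another
            fp = counts[:, k].sum() - tp       # another, reconstructed as h
            tn = counts.sum() - tp - fn - fp
            print(f"{h}: sensitivity = {tp / (tp + fn):.3f}, "
                  f"specificity = {tn / (tn + fp):.3f}")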

  20. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  1. Interfractional and intrafractional errors assessed by daily cone-beam computed tomography in nasopharyngeal carcinoma treated with intensity-modulated radiation therapy. A prospective study

    International Nuclear Information System (INIS)

    Lu Heming; Lin Hui; Feng Guosheng

    2012-01-01

    This prospective study was to assess interfractional and intrafractional errors and to estimate appropriate margins for planning target volume (PTV) by using daily cone-beam computed tomography (CBCT) guidance in nasopharyngeal carcinoma (NPC). Daily pretreatment and post-treatment CBCT scans were acquired separately after initial patient setup and after the completion of each treatment fraction in 10 patients treated with intensity-modulated radiation therapy (IMRT). Online corrections were made before treatment if any translational setup error was found. Interfractional and intrafractional errors were recorded in the right-left (RL), superior-inferior (SI) and anterior-posterior (AP) directions. For the translational shifts, interfractional errors >2 mm occurred in 21.7% of measurements in the RL direction, 12.7% in the SI direction and 34.1% in the AP direction, respectively. Online correction resulted in 100% of residual errors ≤2 mm in the RL and SI directions, and 95.5% of residual errors ≤2 mm in the AP direction. No residual errors >3 mm occurred in the three directions. For the rotational shifts, a significant reduction was found in the magnitudes of residual errors compared with those of interfractional errors. A margin of 4.9 mm, 4.0 mm and 6.3 mm was required in the RL, SI and AP directions, respectively, when daily CBCT scans were not performed. With daily CBCT, the margins were reduced to 1.2 mm in all directions. In conclusion, daily CBCT guidance is an effective modality to improve the accuracy of IMRT for NPC. The online correction could result in a 70-81% reduction in margin size. (author)

  2. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  3. Neutron-induced soft errors in CMOS circuits

    International Nuclear Information System (INIS)

    Hazucha, P.

    1999-01-01

    The subject of this thesis is a systematic study of soft errors occurring in CMOS integrated circuits when being exposed to radiation. The vast majority of commercial circuits operate in the natural environment ranging from the sea level to aircraft flight altitudes (less than 20 km), where the errors are caused mainly by interaction of atmospheric neutrons with silicon. Initially, the soft error rate (SER) of a static memory was measured for supply voltages from 2V to 5V when irradiated by 14 MeV and 100 MeV neutrons. Increased error rate due to the decreased supply voltage has been identified as a potential hazard for operation of future low-voltage circuits. A novel methodology was proposed for accurate SER characterization of a manufacturing process and it was validated by measurements on a 0.6 μm process and 100 MeV neutrons. The methodology can be applied to the prediction of SER in the natural environment

  4. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    International Nuclear Information System (INIS)

    Gordon, J J; Siebers, J V

    2007-01-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion σ[1 − γN/25] ≲ 0.2; outside this range they overestimated dose coverage, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ≳ σP, where σP = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σP takes values other than 0.32 cm.) When σ ≲ σP, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ≳ σP, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of treatments. The proposed alternative margin algorithm provides better margin estimates across this wider range of parameter values.
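
    For reference, the VHMF itself has the simple closed form M = 2.5Σ + 1.64(σ' − σp) with σ' = sqrt(σ² + σp²) (van Herk et al.). A direct transcription, using the 3.2 mm penumbra width quoted above:

        import math

        def vhmf_margin(big_sigma, sigma, sigma_p=3.2):
            """CTV-to-PTV margin (mm) for systematic and random setup errors."""
            sigma_prime = math.sqrt(sigma ** 2 + sigma_p ** 2)
            return 2.5 * big_sigma + 1.64 * (sigma_prime - sigma_p)

        # Example: Sigma = 2 mm systematic, sigma = 3 mm random -> about 7.0 mm.
        print(f"{vhmf_margin(2.0, 3.0):.1f} mm")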

  5. Sources of medical error in refractive surgery.

    Science.gov (United States)

    Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B

    2013-05-01

    To evaluate the causes of laser programming errors in refractive surgery and the outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost at least one line of corrected distance visual acuity (CDVA). Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: cylinder conversion errors, data entry errors, and patient identification errors. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.

  6. Slotted rotatable target assembly and systematic error analysis for a search for long range spin dependent interactions from exotic vector boson exchange using neutron spin rotation

    Science.gov (United States)

    Haddock, C.; Crawford, B.; Fox, W.; Francis, I.; Holley, A.; Magers, S.; Sarsour, M.; Snow, W. M.; Vanderwerp, J.

    2018-03-01

    We discuss the design and construction of a novel target array of nonmagnetic test masses used in a neutron polarimetry measurement made in a search for possible new exotic spin-dependent neutron-atom interactions of Nature at sub-mm length scales. This target was designed to accept and efficiently transmit a transversely polarized slow neutron beam through a series of long open parallel slots bounded by flat rectangular plates. These openings possessed equal atom density gradients normal to the slots from the flat test masses, with dimensions optimized to achieve maximum sensitivity to an exotic spin-dependent interaction from vector boson exchanges with ranges in the mm–μm regime. The parallel slots were oriented differently in four quadrants that can be rotated about the neutron beam axis in discrete 90° increments using a Geneva drive. The spin rotation signals from the four quadrants were measured using a segmented neutron ion chamber to suppress possible systematic errors from stray magnetic fields in the target region. We discuss the per-neutron sensitivity of the target to the exotic interaction, the design constraints, the potential sources of systematic errors which could be present in this design, and our estimate of the achievable sensitivity using this method.

  7. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    Energy Technology Data Exchange (ETDEWEB)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.; Bard, D.; Bertin, E.; Bosch, J.; Boutigny, D.; Courbin, F.; Dawson, W. A.; Donnarumma, A.; Fenech Conti, I.; Gavazzi, R.; Gentile, M.; Gill, M. S. S.; Hogg, D. W.; Huff, E. M.; Jee, M. J.; Kacprzak, T.; Kilbinger, M.; Kuntzer, T.; Lang, D.; Luo, W.; March, M. C.; Marshall, P. J.; Meyers, J. E.; Miller, L.; Miyatake, H.; Nakajima, R.; Ngole Mboula, F. M.; Nurbaeva, G.; Okura, Y.; Paulin-Henriksson, S.; Rhodes, J.; Schneider, M. D.; Shan, H.; Sheldon, E. S.; Simet, M.; Starck, J. -L.; Sureau, F.; Tewes, M.; Zarb Adami, K.; Zhang, J.; Zuntz, J.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  8. Exact Solutions for Internuclear Vectors and Backbone Dihedral Angles from NH Residual Dipolar Couplings in Two Media, and their Application in a Systematic Search Algorithm for Determining Protein Backbone Structure

    International Nuclear Information System (INIS)

    Wang Lincong; Donald, Bruce Randall

    2004-01-01

    We have derived a quartic equation for computing the direction of an internuclear vector from residual dipolar couplings (RDCs) measured in two aligning media, and two simple trigonometric equations for computing the backbone (φ,ψ) angles from two backbone vectors in consecutive peptide planes. These equations make it possible to compute, exactly and in constant time, the backbone (φ,ψ) angles for a residue from RDCs in two media on any single backbone vector type. Building upon these exact solutions we have designed a novel algorithm for determining a protein backbone substructure consisting of α-helices and β-sheets. Our algorithm employs a systematic search technique to refine the conformation of both α-helices and β-sheets and to determine their orientations using exclusively the angular restraints from RDCs. The algorithm computes the backbone substructure employing very sparse distance restraints between pairs of α-helices and β-sheets refined by the systematic search. The algorithm has been demonstrated on the protein human ubiquitin using only backbone NH RDCs, plus twelve hydrogen bonds and four NOE distance restraints. Further, our results show that both the global orientations and the conformations of α-helices and β-strands can be determined with high accuracy using only two RDCs per residue. The algorithm requires, as its input, backbone resonance assignments, the identification of α-helices and β-sheets as well as sparse NOE distance and hydrogen bond restraints. Abbreviations: NMR - nuclear magnetic resonance; RDC - residual dipolar coupling; NOE - nuclear Overhauser effect; SVD - singular value decomposition; DFS - depth-first search; RMSD - root mean square deviation; POF - principal order frame; PDB - protein data bank; SA - simulated annealing; MD - molecular dynamics
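
    The equations in this record invert the dependence of an RDC on the internuclear vector; the forward relation itself is compact: for a unit vector b and Saupe order matrix S, D = Dmax·bᵀSb. A small sketch with an invented alignment tensor (the N-H Dmax is the commonly quoted literature value):

        import numpy as np

        def rdc(b, saupe, d_max):
            b = np.asarray(b, dtype=float)
            b = b / np.linalg.norm(b)          # unit internuclear vector
            return d_max * b @ saupe @ b

        # Traceless, symmetric Saupe matrix for a hypothetical alignment medium.
        S = np.array([[ 4.0e-4,  1.0e-4,  0.0   ],
                      [ 1.0e-4, -1.0e-4,  2.0e-4],
                      [ 0.0,     2.0e-4, -3.0e-4]])
        D_MAX_NH = -21700.0                    # Hz, commonly quoted N-H constant

        print(f"predicted NH RDC: {rdc([0.3, 0.5, 0.81], S, D_MAX_NH):.2f} Hz")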

  9. Validity and reliability of a novel 3D scanner for assessment of the shape and volume of amputees' residual limb models.

    Directory of Open Access Journals (Sweden)

    Elena Seminati

    Objective assessment methods to monitor residual limb volume following lower-limb amputation are required to enhance practitioner-led prosthetic fitting. Computer-aided systems, including 3D scanners, present numerous advantages, and the recent Artec Eva scanner, based on laser-free technology, could potentially be an effective solution for monitoring residual limb volumes. The aim of this study was to assess the validity and reliability of the Artec Eva scanner (practical measurement) against a high-precision laser 3D scanner (criterion measurement) for the determination of residual limb model shape and volume. Three observers completed three repeat assessments of ten residual limb models, using both scanners. Validity of the Artec Eva scanner was assessed (mean percentage error <2%), and Bland-Altman statistics were adopted to assess the agreement between the two scanners. Intra- and inter-rater reliability (repeatability coefficient <5%) of the Artec Eva scanner was calculated for measuring indices of residual limb model volume and shape (i.e. residual limb cross sectional areas and perimeters). Residual limb model volumes ranged from 885 to 4399 ml. The mean percentage error of the Artec Eva scanner (validity) was 1.4% of the criterion volumes. Correlation coefficients between the Artec Eva and the Romer determined variables were higher than 0.9. Volume intra-rater and inter-rater reliability coefficients were 0.5% and 0.7%, respectively. The shape percentage maximal error was 2% at the distal end of the residual limb, with intra-rater reliability coefficients presenting the lowest errors (0.2%), both for cross sectional areas and perimeters of the residual limb models. The Artec Eva scanner is a valid and reliable method for assessing residual limb model shapes and volumes. While the method needs to be tested on human residual limbs and the results compared with the current system used in clinical practice, it has the potential to quantify shape and volume.
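
    The Bland-Altman statistics used in this record reduce to the mean difference (bias) and 1.96 SD limits of agreement. A short sketch with invented volume pairs spanning the reported 885-4399 ml range:

        import numpy as np

        criterion = np.array([885, 1210, 1640, 2050, 2480, 2910, 3320, 3760, 4100, 4399])
        practical = np.array([901, 1195, 1662, 2071, 2455, 2935, 3301, 3812, 4066, 4451])

        diff = practical - criterion
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)          # 95% limits of agreement
        pct_err = 100 * np.abs(diff / criterion).mean()

        print(f"bias = {bias:.1f} ml, limits of agreement = "
              f"[{bias - loa:.1f}, {bias + loa:.1f}] ml, mean % error = {pct_err:.2f}%")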

  10. Evaluation of rotational set-up errors in patients with thoracic neoplasms

    International Nuclear Information System (INIS)

    Wang Yanyang; Fu Xiaolong; Xia Bing; Fan Min; Yang Huanjun; Ren Jun; Xu Zhiyong; Jiang Guoliang

    2010-01-01

    Objective: To assess the rotational set-up errors in patients with thoracic neoplasms. Methods: 224 kilovoltage cone-beam computed tomography (KVCBCT) scans from 20 thoracic tumor patients were evaluated retrospectively. All these patients were involved in the research 'Evaluation of the residual set-up error for online kilovoltage cone-beam CT guided thoracic tumor radiation'. Rotational set-up errors, including pitch, roll and yaw, were calculated by aligning the KVCBCT with the planning CT, using the semi-automatic alignment method. Results: The average rotational set-up errors were -0.28° ± 1.52°, 0.21° ± 0.91° and 0.27° ± 0.78° about the left-right, superior-inferior and anterior-posterior axes, respectively. The maximal rotational errors of pitch, roll and yaw were 3.5°, 2.7° and 2.2°, respectively. After correction for translational set-up errors, no statistically significant changes in rotational error were observed. Conclusions: The rotational set-up errors in patients with thoracic neoplasms were all small in magnitude. Rotational errors may not change after the correction for translational set-up errors alone, which should be evaluated in a larger sample in the future. (authors)

  11. Leptogenesis and residual CP symmetry

    International Nuclear Information System (INIS)

    Chen, Peng; Ding, Gui-Jun; King, Stephen F.

    2016-01-01

    We discuss flavour-dependent leptogenesis in the framework of lepton flavour models based on discrete flavour and CP symmetries applied to the type-I seesaw model. Working in the flavour basis, we analyse the case of two general residual CP symmetries in the neutrino sector, which corresponds to all possible semi-direct models based on a preserved Z₂ in the neutrino sector, together with a CP symmetry, which constrains the PMNS matrix up to a single free parameter which may be fixed by the reactor angle. We systematically study and classify this case for all possible residual CP symmetries, and show that the R-matrix is tightly constrained up to a single free parameter, with only certain forms being consistent with successful leptogenesis, leading to possible connections between leptogenesis and PMNS parameters. The formalism is completely general in the sense that the two residual CP symmetries could result from any high energy discrete flavour theory which respects any CP symmetry. As a simple example, we apply the formalism to a high energy S₄ flavour symmetry with a generalized CP symmetry, broken to two residual CP symmetries in the neutrino sector, recovering familiar results for PMNS predictions, together with new results for flavour-dependent leptogenesis.

  12. A user's manual of Tools for Error Estimation of Complex Number Matrix Computation (Ver.1.0)

    International Nuclear Information System (INIS)

    Ichihara, Kiyoshi.

    1997-03-01

    'Tools for Error Estimation of Complex Number Matrix Computation' is a subroutine library which aids users in obtaining the error ranges of complex-number linear systems' solutions or of Hermitian matrices' eigenvalues. This library contains routines for both sequential computers and parallel computers. The subroutines for linear system error estimation calculate norms of residual vectors, matrices' condition numbers, error bounds of solutions, and so on. The error estimation subroutines for Hermitian matrix eigenvalues derive the error ranges of the eigenvalues according to the Korn-Kato formula. This user's manual contains a brief mathematical background of error analysis on linear algebra and the usage of the subroutines. (author)

  13. Aliasing errors in measurements of beam position and ellipticity

    International Nuclear Information System (INIS)

    Ekdahl, Carl

    2005-01-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all
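
    The aliasing mechanism described here can be reproduced with a toy model: the image-current signal of a pencil beam at radius r and angle φ in a pipe of radius b is s(θ) ∝ 1 + 2Σm (r/b)^m cos(m(θ − φ)), and a finite detector array folds higher multipoles into the dipole (position) estimate. Geometry and beam offset below are invented:

        import numpy as np

        def position_estimate(r, phi, b=10.0, n_det=4, n_multipole=20):
            theta = 2 * np.pi * np.arange(n_det) / n_det        # detector angles
            s = np.ones(n_det)                                  # wall signals
            for m in range(1, n_multipole + 1):
                s += 2 * (r / b) ** m * np.cos(m * (theta - phi))
            # Discrete estimate of the dipole moment -> horizontal position.
            return b * np.sum(s * np.cos(theta)) / np.sum(s)

        r, phi, b = 4.0, 0.3, 10.0                              # off-center beam
        x_true = r * np.cos(phi)
        for n in (4, 8, 16):
            err = position_estimate(r, phi, b, n_det=n) - x_true
            print(f"{n:2d} detectors: aliasing error = {err:+.4f} mm")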

  14. Aliasing errors in measurements of beam position and ellipticity

    Science.gov (United States)

    Ekdahl, Carl

    2005-09-01

    Beam position monitors (BPMs) are used in accelerators and ion experiments to measure currents, position, and azimuthal asymmetry. These usually consist of discrete arrays of electromagnetic field detectors, with detectors located at several equally spaced azimuthal positions at the beam tube wall. The discrete nature of these arrays introduces systematic errors into the data, independent of uncertainties resulting from signal noise, lack of recording dynamic range, etc. Computer simulations were used to understand and quantify these aliasing errors. If required, aliasing errors can be significantly reduced by employing more than the usual four detectors in the BPMs. These simulations show that the error in measurements of the centroid position of a large beam is indistinguishable from the error in the position of a filament. The simulations also show that aliasing errors in the measurement of beam ellipticity are very large unless the beam is accurately centered. The simulations were used to quantify the aliasing errors in beam parameter measurements during early experiments on the DARHT-II accelerator, demonstrating that they affected the measurements only slightly, if at all.

  15. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to gain deeper insights into systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function) or a function may show up in unexpected forms. In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. By contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which the potential of SLA-oriented (non-error-based) tagging is made clearer.

  16. An Adaptive Estimation of Forecast Error Covariance Parameters for Kalman Filtering Data Assimilation

    Institute of Scientific and Technical Information of China (English)

    Xiaogu ZHENG

    2009-01-01

    An adaptive estimation of forecast error covariance matrices is proposed for Kalman filtering data assimilation. A forecast error covariance matrix is initially estimated using an ensemble of perturbation forecasts. This initially estimated matrix is then adjusted with scale parameters that are adaptively estimated by minimizing the −2 log-likelihood of observed-minus-forecast residuals. The proposed approach could be applied to Kalman filtering data assimilation with imperfect models when the model error statistics are not known. A simple nonlinear model (Burgers' equation model) is used to demonstrate the efficacy of the proposed approach.
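
    A minimal sketch of the scale-parameter step described above, assuming a toy linear observation operator H, ensemble covariance P and observation covariance R: the scale λ multiplying P is chosen to minimize the −2 log-likelihood, log det S + dᵀS⁻¹d, of the observed-minus-forecast residual d, with innovation covariance S = λHPHᵀ + R.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(2)
        n_obs, n_state = 8, 5
        H = rng.normal(size=(n_obs, n_state))          # observation operator
        P = 0.5 * np.eye(n_state)                      # ensemble forecast covariance
        R = 0.2 * np.eye(n_obs)                        # observation error covariance
        # Synthetic observed-minus-forecast residual; the "true" scale is 2.
        d = rng.multivariate_normal(np.zeros(n_obs), 2.0 * H @ P @ H.T + R)

        def neg2loglik(lam):
            S = lam * H @ P @ H.T + R                  # innovation covariance
            _, logdet = np.linalg.slogdet(S)
            return logdet + d @ np.linalg.solve(S, d)

        res = minimize_scalar(neg2loglik, bounds=(1e-3, 10.0), method="bounded")
        print(f"estimated scale parameter: {res.x:.2f}")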

  17. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task and requires an analyst with a professional medical background. A method is therefore needed to extract medical error factors and to reduce the extraction difficulty. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted; these items were then related to 12 error factors. The relational model between the error-related items and the error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared with plain BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.

  18. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. High-quality planning images, such as

  19. A new systematic calibration method of ring laser gyroscope inertial navigation system

    Science.gov (United States)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    The inertial navigation system (INS) has been the core component of both military and civil navigation systems. Before an INS is put into application, it has to be calibrated in the laboratory in order to compensate for the repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated Inertial Measurement Unit are given. Proper rotation arrangement orders are then chosen in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors calculated from the filtering calibration result with those obtained from the discrete calibration result. The largest position error and velocity error of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of the calibration of mechanically dithered ring laser gyroscope inertial navigation systems.

  20. Monitoring residue in animals and primary products of animal origin

    Directory of Open Access Journals (Sweden)

    Janković Saša

    2008-01-01

    The objective of control and systematic monitoring of residues is to secure, through the examination of a corresponding number of samples, efficient monitoring of residue levels in tissues and organs of animals, as well as in primary products of animal origin. This creates possibilities for the timely taking of measures to secure the hygiene of food of animal origin and to protect public health. Residues can be a consequence of the inadequate use of medicines in veterinary medicine and of pesticides in agriculture and veterinary medicine, as well as of the polluting of the environment with toxic elements, dioxins, polychlorinated biphenyls, and others. Residues have been monitored in Serbia since 1972, and in 2004 national monitoring was brought to the level of EU countries through significant investments by the Ministry of Agriculture, Forestry and Water Management. This is also evident in the EU directives which permit exports of all kinds of meat and primary products of animal origin covered by the Residue Monitoring Program. The program of systematic examinations of residues has been coordinated with the requirements of the European Union, both according to the type of examined substance and according to the number of samples and the applied analytical techniques. In addition to the development of methods and the inclusion of new harmful substances into the monitoring program, it is also necessary to coordinate the national regulations that define the maximum permitted quantities of certain medicines and contaminants with the EU regulations, in order to protect the health of consumers as efficiently as possible and for the country to take an equal part in international trade.

  1. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a method for correcting volumetric geometric errors in CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer serves as the Observer to measure the machine's error functions. A systematic error map of the machine's workspace is produced from these error function measurements, and the error map yields the error correction strategy. The article proposes a new method of forming the error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
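
    A minimal sketch of the map-based correction idea follows, assuming a hypothetical per-axis error map sampled on a regular grid; the paper's interferometer data and postprocessor are not reproduced here.

      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Hypothetical volumetric error map: positioning error of the X axis
      # (mm) sampled on a coarse grid spanning the machine workspace.
      x = np.linspace(0, 500, 6)
      y = np.linspace(0, 400, 5)
      z = np.linspace(0, 300, 4)
      X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
      err_x = 1e-5 * X + 5e-6 * Y      # illustrative systematic error field

      interp = RegularGridInterpolator((x, y, z), err_x)

      def corrected_target(p):
          """Postprocessor-style correction: command the nominal target
          minus the interpolated systematic error at that point."""
          p = np.asarray(p, dtype=float)
          return p[0] - interp(p)[0], p[1], p[2]

      print(corrected_target([250.0, 200.0, 150.0]))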

  2. Instrumental systematics and weak gravitational lensing

    International Nuclear Information System (INIS)

    Mandelbaum, R.

    2015-01-01

    We present a pedagogical review of the weak gravitational lensing measurement process and its connection to major scientific questions such as dark matter and dark energy. Then we describe common ways of parametrizing systematic errors and understanding how they affect weak lensing measurements. Finally, we discuss several instrumental systematics and how they fit into this context, and conclude with some future perspectives on how progress can be made in understanding the impact of instrumental systematics on weak lensing measurements.

  3. Undesirable effects of covariance matrix techniques for error analysis

    International Nuclear Information System (INIS)

    Seibert, D.

    1994-01-01

    Regression with χ² constructed from covariance matrices should not be used for some combinations of covariance matrices and fitting functions. Using the technique for unsuitable combinations can amplify systematic errors. This amplification is uncontrolled, and can produce arbitrarily inaccurate results that might not be ruled out by a χ² test. In addition, this technique can give incorrect (artificially small) errors for fit parameters. I give a test for this instability and a more robust (but computationally more intensive) method for fitting correlated data.
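
    A numerical illustration of this instability, in the spirit of the well-known Peelle's Pertinent Puzzle (an assumed example, not one from the paper): two measurements of the same quantity are averaged by generalized least squares with a covariance matrix containing a fully correlated normalization error, and the fit falls below both measurements.

      import numpy as np

      # Two measurements with independent 10% statistical errors plus a
      # fully correlated 20% normalization error (illustrative numbers).
      y = np.array([1.5, 1.0])
      stat = 0.10 * y
      norm = 0.20
      C = np.diag(stat**2) + norm**2 * np.outer(y, y)

      # Generalized least-squares average: mu = (1^T C^-1 y) / (1^T C^-1 1)
      ones = np.ones_like(y)
      Cinv = np.linalg.inv(C)
      mu = ones @ Cinv @ y / (ones @ Cinv @ ones)
      sigma = np.sqrt(1.0 / (ones @ Cinv @ ones))

      print(f"GLS average: {mu:.3f} +/- {sigma:.3f}")
      # The fitted value (~0.88) falls below BOTH measurements -- the
      # uncontrolled amplification the abstract warns about.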

  4. Evaluation of positioning errors of the patient using cone beam CT megavoltage

    International Nuclear Information System (INIS)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-01-01

    Image-guided radiation therapy makes it possible to assess and correct the positioning of the patient in the treatment unit, thus reducing the uncertainties due to patient setup. This work assesses the systematic and random errors derived from the corrections made to a series of patients with different diseases through an off-line megavoltage cone beam CT (CBCT) protocol. (Author)

  5. Annotating Protein Functional Residues by Coupling High-Throughput Fitness Profile and Homologous-Structure Analysis

    Directory of Open Access Journals (Sweden)

    Yushen Du

    2016-11-01

    Full Text Available Identification and annotation of functional residues are fundamental questions in protein sequence analysis. Sequence and structure conservation provides valuable information to tackle these questions. It is, however, limited by the incomplete sampling of sequence space in natural evolution. Moreover, proteins often have multiple functions, with overlapping sequences that present challenges to accurate annotation of the exact functions of individual residues by conservation-based methods. Using the influenza A virus PB1 protein as an example, we developed a method to systematically identify and annotate functional residues. We used saturation mutagenesis and high-throughput sequencing to measure the replication capacity of single nucleotide mutations across the entire PB1 protein. After predicting protein stability upon mutations, we identified functional PB1 residues that are essential for viral replication. To further annotate the functional residues important to the canonical or noncanonical functions of viral RNA-dependent RNA polymerase (vRdRp, we performed a homologous-structure analysis with 16 different vRdRp structures. We achieved high sensitivity in annotating the known canonical polymerase functional residues. Moreover, we identified a cluster of noncanonical functional residues located in the loop region of the PB1 β-ribbon. We further demonstrated that these residues were important for PB1 protein nuclear import through the interaction with Ran-binding protein 5. In summary, we developed a systematic and sensitive method to identify and annotate functional residues that are not restrained by sequence conservation. Importantly, this method is generally applicable to other proteins about which homologous-structure information is available.

  6. Measuring depth profiles of residual stress with Raman spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Enloe, W.S.; Sparks, R.G.; Paesler, M.A.

    1988-12-01

    Knowledge of the variation of residual stress is a very important factor in understanding the properties of machined surfaces. The nature of the residual stress can determine a part's susceptibility to wear, deformation, and cracking. Raman spectroscopy is known to be a very useful technique for measuring residual stress in many materials. These measurements are routinely made with a lateral resolution of 1 μm and an accuracy of 0.1 kbar. The variation of stress with depth, however, has not received much attention in the past. A novel technique has been developed that allows quantitative measurement of the variation of residual stress with depth, with an accuracy of 10 nm in the z direction. Qualitative techniques for determining whether the stress varies with depth are presented. It is also demonstrated that when the stress changes over the sampled volume, errors can be introduced if the variation of stress with depth is ignored. Computer-aided data analysis is used to determine the depth dependence of the residual stress.

  7. Review on the Influences of Bagging Treatment on Pesticide Residue in Fruits

    OpenAIRE

    ZHAO Xiao-yun; XIE De-fang

    2018-01-01

    At present, bagging technology is widely applied in fruit cultivation. Studies of the impact of bagging treatment on pesticide residues have reached differing results. On the basis of existing work, this paper systematically analyzes the influence of different bagging treatments on pesticide residues, such as: different ways of applying pesticide, pesticide concentration, number of pesticide applications; bagging materials, number of bag layers; and the type of pesticide (systemic pesticide, nonendoscopic pestici...

  8. On the errors on Omega(0): Monte Carlo simulations of the EMSS cluster sample

    DEFF Research Database (Denmark)

    Oukbir, J.; Arnaud, M.

    2001-01-01

    We perform Monte Carlo simulations of synthetic EMSS cluster samples to quantify the systematic errors and the statistical uncertainties on the estimate of Ω_0 derived from fits to the cluster number density evolution and to the X-ray temperature distribution up to z = 0.83. We identify the scatter around the relation between cluster X-ray luminosity and temperature as a source of systematic error, of the order of Δ_syst Ω_0 = 0.09, if not properly taken into account in the modelling. After correcting for this bias, our best Ω_0 is 0.66. The uncertainties on the shape...

  9. Error reduction and parameter optimization of the TAPIR method for fast T1 mapping.

    Science.gov (United States)

    Zaitsev, M; Steinhoff, S; Shah, N J

    2003-06-01

    A methodology is presented for the reduction of both systematic and random errors in T(1) determination using TAPIR, a Look-Locker-based fast T(1) mapping technique. The relations between various sequence parameters were carefully investigated in order to develop recipes for choosing optimal sequence parameters. Theoretical predictions for the optimal flip angle were verified experimentally. Inversion pulse imperfections were identified as the main source of systematic errors in T(1) determination with TAPIR. An effective remedy is demonstrated which includes extension of the measurement protocol to include a special sequence for mapping the inversion efficiency itself. Copyright 2003 Wiley-Liss, Inc.

  10. Understanding errors in EIA projections of energy demand

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Carolyn; Herrnstadt, Evan; Morgenstern, Richard [Resources for the Future, 1616 P St. NW, Washington, DC 20036 (United States)

    2009-08-15

    This paper investigates the potential for systematic errors in the Energy Information Administration's (EIA) widely used Annual Energy Outlook, focusing on the near- to mid-term projections of energy demand. Based on analysis of the EIA's 22-year projection record, we find a fairly modest but persistent tendency to underestimate total energy demand by an average of 2 percent per year after controlling for projection errors in gross domestic product, oil prices, and heating/cooling degree days. For 14 individual fuels/consuming sectors routinely reported by the EIA, we observe a great deal of directional consistency in the errors over time, ranging up to 7 percent per year. Electric utility renewables, electric utility natural gas, transportation distillate, and residential electricity show significant biases on average. Projections for certain other sectors have significant unexplained errors for selected time horizons. Such independent evaluation can be useful for validating analytic efforts and for prioritizing future model revisions. (author)

  11. Systematic errors in the determination of the spectroscopic g-factor in broadband ferromagnetic resonance spectroscopy: A proposed solution

    Science.gov (United States)

    Gonzalez-Fuentes, C.; Dumas, R. K.; García, C.

    2018-01-01

    A theoretical and experimental study of the influence of small offsets of the magnetic field (δH) on the measurement accuracy of the spectroscopic g-factor (g) and saturation magnetization (Ms) obtained by broadband ferromagnetic resonance (FMR) measurements is presented. The random nature of δH generates systematic, opposite-sign deviations of the values of g and Ms with respect to their true values. A δH on the order of a few Oe leads to a ~10% error in g and Ms for a typical range of frequencies employed in broadband FMR experiments. We propose a simple experimental methodology to significantly minimize the effect of δH on the fitted values of g and Ms, eliminating their apparent dependence on the range of frequencies employed. Our method was successfully tested using broadband FMR measurements on a 5 nm thick Ni80Fe20 film for frequencies ranging between 3 and 17 GHz.
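
    A sketch of the kind of fit involved, assuming the standard in-plane-film Kittel relation with the field offset included as a free parameter; the data below are synthetic, not the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      MU_B = 9.274e-24      # Bohr magneton (J/T)
      H_PLANCK = 6.626e-34  # Planck constant (J s)

      def kittel(B, g, B_ms, dB):
          """In-plane-film Kittel relation with a free field offset dB.
          B is the applied field in tesla (mu0*H); B_ms = mu0*Ms."""
          gamma = g * MU_B / H_PLANCK          # Hz per tesla
          B_eff = B + dB
          return gamma * np.sqrt(B_eff * (B_eff + B_ms)) / 1e9   # GHz

      # Synthetic broadband FMR data with a 5 Oe (0.5 mT) field offset.
      B = np.linspace(0.02, 0.25, 30)          # resonance fields (T)
      f = kittel(B, 2.11, 1.0, 0.5e-3) \
          + np.random.default_rng(6).normal(0, 0.02, 30)

      # Fitting with dB free recovers g; fixing dB = 0 would bias g and
      # Ms in opposite directions, as the abstract describes.
      (g_fit, Bms_fit, dB_fit), _ = curve_fit(kittel, B, f, p0=[2.0, 0.9, 0.0])
      print(f"g = {g_fit:.3f}, mu0*Ms = {Bms_fit:.3f} T, "
            f"offset = {dB_fit*1e3:.2f} mT")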

  12. Annotating Protein Functional Residues by Coupling High-Throughput Fitness Profile and Homologous-Structure Analysis.

    Science.gov (United States)

    Du, Yushen; Wu, Nicholas C; Jiang, Lin; Zhang, Tianhao; Gong, Danyang; Shu, Sara; Wu, Ting-Ting; Sun, Ren

    2016-11-01

    Identification and annotation of functional residues are fundamental questions in protein sequence analysis. Sequence and structure conservation provides valuable information to tackle these questions. It is, however, limited by the incomplete sampling of sequence space in natural evolution. Moreover, proteins often have multiple functions, with overlapping sequences that present challenges to accurate annotation of the exact functions of individual residues by conservation-based methods. Using the influenza A virus PB1 protein as an example, we developed a method to systematically identify and annotate functional residues. We used saturation mutagenesis and high-throughput sequencing to measure the replication capacity of single nucleotide mutations across the entire PB1 protein. After predicting protein stability upon mutations, we identified functional PB1 residues that are essential for viral replication. To further annotate the functional residues important to the canonical or noncanonical functions of viral RNA-dependent RNA polymerase (vRdRp), we performed a homologous-structure analysis with 16 different vRdRp structures. We achieved high sensitivity in annotating the known canonical polymerase functional residues. Moreover, we identified a cluster of noncanonical functional residues located in the loop region of the PB1 β-ribbon. We further demonstrated that these residues were important for PB1 protein nuclear import through the interaction with Ran-binding protein 5. In summary, we developed a systematic and sensitive method to identify and annotate functional residues that are not restrained by sequence conservation. Importantly, this method is generally applicable to other proteins about which homologous-structure information is available. To fully comprehend the diverse functions of a protein, it is essential to understand the functionality of individual residues. Current methods are highly dependent on evolutionary sequence conservation, which is

  13. Goal-oriented error estimation for Cahn-Hilliard models of binary phase transition

    KAUST Repository

    van der Zee, Kristoffer G.

    2010-10-27

    A posteriori estimates of errors in quantities of interest are developed for the nonlinear system of evolution equations embodied in the Cahn-Hilliard model of binary phase transition. These involve the analysis of wellposedness of dual backward-in-time problems and the calculation of residuals. Mixed finite element approximations are developed and used to deliver numerical solutions of representative problems in one- and two-dimensional domains. Estimated errors are shown to be quite accurate in these numerical examples. © 2010 Wiley Periodicals, Inc.

  14. Irregular analytical errors in diagnostic testing - a novel concept.

    Science.gov (United States)

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control; the quality control measurements act only at the batch level. Effects and interferences associated with an individual diagnostic sample can compromise any analyte, and it is obvious that a quality-control-sample-based approach to quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term, the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (the deviation from a reference measurement procedure result) of a test result that is too large to be explained by the measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be termed irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or by an individual single-sample processing error in the analytical process. Currently, the availability of reference measurement procedures is still highly limited, but LC

  15. Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.

    Science.gov (United States)

    Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D

    2017-06-01

    The Institute of Medicine has called for the development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After the intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
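
    For readers unfamiliar with the scoring step of a failure mode and effects analysis, a minimal sketch follows; the failure modes and scores are invented for illustration and are not taken from the study.

      from dataclasses import dataclass

      @dataclass
      class FailureMode:
          step: str
          severity: int     # 1 (minor) .. 10 (catastrophic)
          occurrence: int   # 1 (rare)  .. 10 (frequent)
          detection: int    # 1 (always detected) .. 10 (undetectable)

          @property
          def rpn(self) -> int:
              # Risk Priority Number: higher = higher-priority target
              return self.severity * self.occurrence * self.detection

      modes = [
          FailureMode("syringe mislabeled", 8, 5, 6),
          FailureMode("wrong concentration drawn", 9, 3, 7),
          FailureMode("infusion pump misprogrammed", 9, 4, 5),
      ]
      for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
          print(f"{m.step:32s} RPN={m.rpn}")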

  16. Orbit error characteristic and distribution of TLE using CHAMP orbit data

    Science.gov (United States)

    Xu, Xiao-li; Xiong, Yong-qing

    2018-02-01

    Space object orbital covariance data is required for collision risk assessments, but publicly accessible two line element (TLE) data does not provide orbital error information. This paper compared historical TLE data and GPS precision ephemerides of CHAMP to assess TLE orbit accuracy from 2002 to 2008, inclusive. TLE error spatial variations with longitude and latitude were calculated to analyze error characteristics and distribution. The results indicate that TLE orbit data are systematically biased owing to the limitations of the SGP4 model. The biases can reach the level of kilometers, and their sign and magnitude correlate significantly with longitude.

  17. Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors

    Directory of Open Access Journals (Sweden)

    G. Yang

    2003-06-01

    Full Text Available Examples of current research into systematic errors in climate models are used to demonstrate the importance of scale interactions on diurnal, intraseasonal and seasonal timescales for the mean and variability of the tropical climate system. This has enabled some conclusions to be drawn about possible processes that may need to be represented, and some recommendations to be made regarding model improvements. It has been shown that the Maritime Continent heat source is a major driver of the global circulation yet is poorly represented in GCMs. A new climatology of the diurnal cycle has been used to provide compelling evidence of important land-sea breeze and gravity wave effects, which may play a crucial role in the heat and moisture budget of this key region for the tropical and global circulation. The role of the diurnal cycle has also been emphasized for intraseasonal variability associated with the Madden-Julian Oscillation (MJO). It is suggested that the diurnal cycle in Sea Surface Temperature (SST) during the suppressed phase of the MJO leads to a triggering of cumulus congestus clouds, which serve to moisten the free troposphere and hence precondition the atmosphere for the next active phase. It has been further shown that coupling between the ocean and atmosphere on intraseasonal timescales leads to a more realistic simulation of the MJO. These results stress the need for models to be able to simulate, firstly, the observed tri-modal distribution of convection, and secondly, the coupling between the ocean and atmosphere on diurnal to intraseasonal timescales. It is argued, however, that the current representation of the ocean mixed layer in coupled models is not adequate to represent the complex structure of the observed mixed layer, in particular the formation of salinity barrier layers which can potentially provide much stronger local coupling between the atmosphere and ocean on diurnal to intraseasonal timescales.

  18. Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Auflick, Jack L.

    1999-04-21

    Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each HRA method has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology, or improper application of techniques, can produce invalid HEP estimates, and such erroneous estimation of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.

  19. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    Science.gov (United States)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  20. Human error in strabismus surgery: Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    S. Schutte (Sander); J.R. Polling (Jan Roelof); F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)

    2009-01-01

    textabstractBackground: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence

  1. Investigation of Diesel’s Residual Noise on Predictive Vehicles Noise Cancelling using LMS Adaptive Algorithm

    Science.gov (United States)

    Arttini Dwi Prasetyowati, Sri; Susanto, Adhi; Widihastuti, Ida

    2017-04-01

    Every noise problem requires a different solution. In this research, the noise that must be cancelled comes from the roadway. The Least Mean Square (LMS) adaptive algorithm is one algorithm that can be used to cancel that noise. Residual noise always appears and cannot be erased completely. This research aims to characterize the residual noise left after cancelling vehicle noise and to analyze it so that it no longer appears as a problem. The LMS algorithm was used to predict the vehicle noise and minimize the error. The distribution of the residual noise can be observed to determine its specific character. The statistics of the residual noise are close to a normal distribution (mean ≈ 0.0435, standard deviation ≈ 1.13), and the autocorrelation of the residual noise approximates an impulse. In conclusion, the residual noise is insignificant.
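
    A minimal sketch of the LMS prediction loop and the residual-noise statistics it leaves behind, using a synthetic reference signal and an assumed unknown noise path rather than the study's recordings.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic "vehicle noise": reference signal x and the noise d it
      # produces at the listener through a simple unknown FIR path.
      n = 5000
      x = rng.standard_normal(n)
      true_path = np.array([0.6, -0.3, 0.1])
      d = np.convolve(x, true_path, mode="full")[:n]

      # LMS adaptive filter: predict d from x, leaving a residual error e.
      L, mu = 8, 0.01               # filter taps and step size
      w = np.zeros(L)
      e = np.zeros(n)
      for k in range(L, n):
          xk = x[k - L + 1:k + 1][::-1]   # x[k], x[k-1], ..., x[k-L+1]
          y = w @ xk                      # filter output (noise estimate)
          e[k] = d[k] - y                 # residual noise
          w += 2 * mu * e[k] * xk         # LMS weight update

      res = e[n // 2:]                    # discard the convergence transient
      print(f"residual mean={res.mean():.4f}, std={res.std():.4f}")
      print("lag-1 autocorrelation:", np.corrcoef(res[:-1], res[1:])[0, 1])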

  2. Errors and Correction of Precipitation Measurements in China

    Institute of Scientific and Technical Information of China (English)

    REN Zhihua; LI Mingqin

    2007-01-01

    In order to discover the range of various errors in Chinese precipitation measurements and seek a correction method, 30 precipitation evaluation stations were set up countrywide before 1993. All the stations are reference stations in China. To seek a correction method for wind-induced error, a precipitation correction instrument called the "horizontal precipitation gauge" was devised beforehand. Field intercomparison observations covering 29,000 precipitation events have been conducted using one pit gauge, two elevated operational gauges and one horizontal gauge at the above 30 stations. The range of precipitation measurement errors in China is obtained by analysis of the intercomparison measurement results. The distribution of random errors and systematic errors in precipitation measurements is studied in this paper. A correction method, especially for wind-induced errors, is developed. The results prove that a power-function correlation exists between the precipitation amount caught by the horizontal gauge and the absolute difference between the observations of the operational gauge and the pit gauge. The correlation coefficient is 0.99. For operational observations, precipitation correction can be carried out simply by parallel observation with a horizontal precipitation gauge. The precipitation accuracy after correction approaches that of the pit gauge. The correction method developed is simple and feasible.
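
    A sketch of fitting such a power-function relation by linear regression in log-log space, using synthetic stand-in data rather than the study's gauge records.

      import numpy as np

      # Assumed relation: |P_operational - P_pit| ≈ a * P_horizontal**b,
      # with synthetic numbers in place of the study's data.
      rng = np.random.default_rng(1)
      p_horiz = rng.uniform(0.5, 20.0, 200)          # horizontal-gauge catch (mm)
      diff = 0.35 * p_horiz**0.8 * rng.lognormal(0, 0.05, 200)

      # Fit a, b by linear regression in log-log space.
      b, log_a = np.polyfit(np.log(p_horiz), np.log(diff), 1)
      a = np.exp(log_a)
      print(f"fit: diff ≈ {a:.3f} * P_horizontal^{b:.3f}")

      # Correlation coefficient in log space (the abstract reports r = 0.99).
      r = np.corrcoef(np.log(p_horiz), np.log(diff))[0, 1]
      print(f"correlation r = {r:.3f}")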

  3. Controlling qubit drift by recycling error correction syndromes

    Science.gov (United States)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE

  4. The probability and the management of human error

    International Nuclear Information System (INIS)

    Dufey, R.B.; Saull, J.W.

    2004-01-01

    Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident cause. The consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need to provide a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with and derived from the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best-estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε), and stochastic occurrences with a finite minimum rate: λ = 5×10^-5 + ((1/ε) - 5×10^-5) exp(-3ε). The future failure rate is entirely determined by the experience: thus the past defines the future
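
    A small sketch evaluating the equation as reconstructed above, showing the error rate falling from the inexperienced regime toward the finite minimum as experience ε accumulates; the parameter names are ours.

      import numpy as np

      def human_error_rate(experience, lam_min=5e-5, k=3.0):
          """Error-rate equation as reconstructed from the abstract:
          lambda = lam_min + (1/eps - lam_min) * exp(-k * eps)."""
          eps = np.asarray(experience, dtype=float)
          return lam_min + (1.0 / eps - lam_min) * np.exp(-k * eps)

      for eps in (0.1, 1.0, 5.0, 20.0):
          print(f"eps={eps:5.1f}  lambda={human_error_rate(eps):.3e}")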

  5. Nuclear power plant personnel errors in decision-making as an object of probabilistic risk assessment

    International Nuclear Information System (INIS)

    Reer, B.

    1993-09-01

    The integration of human error analysis - also called man-machine system analysis (MMSA) - is an essential part of probabilistic risk assessment (PRA). A new method is presented which allows for a systematic and comprehensive PRA inclusion of decision-based errors due to conflicts or similarities. For the error identification procedure, new questioning techniques are developed. These errors are shown to be identifiable by looking at retroactions caused by subordinate goals as components of the overall safety-relevant goal. New quantification methods for estimating situation-specific probabilities are developed. The factors conflict and similarity are operationalized in a way that allows their quantification based on information which is usually available in PRA. The quantification procedure uses extrapolations and interpolations based on a sparse set of data related to decision-based errors. Moreover, for passive errors in decision-making a completely new approach is presented in which errors are quantified via a delay in initiating the required action rather than via error probabilities. The practicability of this dynamic approach is demonstrated by a probabilistic analysis of the actions required during the total loss of feedwater event at the Davis-Besse plant in 1985. The extensions of the ''classical'' PRA method developed in this work are applied to an MMSA of the decay heat removal (DHR) of the ''HTR-500''. Errors in decision-making - as potential roots of extraneous acts - are taken into account in a comprehensive and systematic manner. Five additional errors are identified. However, the probabilistic quantification results in a nonsignificant increase of the DHR failure probability. (orig.) [de

  6. Human error in strabismus surgery : Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.

    2008-01-01

    Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods- We identified the primary factors that influence the outcome of

  7. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    Science.gov (United States)

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  8. Sequential Ensembles Tolerant to Synthetic Aperture Radar (SAR) Soil Moisture Retrieval Errors

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2016-04-01

    Full Text Available Due to complicated and undefined systematic errors in satellite observation, data assimilation integrating model states with satellite observations is more complicated than field-measurement-based data assimilation at a local scale. In the case of Synthetic Aperture Radar (SAR) soil moisture, the systematic errors arising from uncertainties in roughness conditions are significant and unavoidable, but current satellite bias correction methods do not resolve the problems very well. Thus, apart from the bias correction process of satellite observation, it is important to assess the inherent capability of satellite data assimilation under such sub-optimal but more realistic observational error conditions. To this end, the time-evolving sequential ensembles of the Ensemble Kalman Filter (EnKF) are compared with the stationary ensemble of the Ensemble Optimal Interpolation (EnOI) scheme, which does not evolve the ensembles over time. As the sensitivity analysis demonstrated that the SAR retrievals are more sensitive to surface roughness than to measurement errors, it is within the scope of this study to monitor how data assimilation alters the effects of roughness on SAR soil moisture retrievals. In the results, both data assimilation schemes provided intermediate values between the SAR overestimation and the model underestimation. However, under the same SAR observational error conditions, the sequential ensembles approached a calibrated model, showing the lowest Root Mean Square Error (RMSE), while the stationary ensemble converged towards the SAR observations, exhibiting the highest RMSE. Compared to stationary ensembles, sequential ensembles have a better tolerance to SAR retrieval errors. This inherent nature of the EnKF suggests an operational merit as a satellite data assimilation system, given the limitations of currently available bias correction methods.
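
    A minimal sketch of a single stochastic EnKF analysis step for a scalar soil-moisture state, with invented ensemble and observation-error values, showing the analysis landing between a low model ensemble and a high (biased) retrieval.

      import numpy as np

      rng = np.random.default_rng(2)

      # Ensemble size, truth, and error levels are illustrative assumptions.
      n_ens = 50
      truth = 0.25                                        # soil moisture (m3/m3)
      ensemble = truth + rng.normal(-0.05, 0.03, n_ens)   # model underestimates
      obs = truth + 0.04                                  # biased SAR retrieval
      obs_err = 0.02                                      # observation error std

      def enkf_update(ens, y, r, rng):
          """Stochastic EnKF update with perturbed observations (H = identity)."""
          var_b = np.var(ens, ddof=1)                # background variance
          k = var_b / (var_b + r**2)                 # Kalman gain
          y_pert = y + rng.normal(0.0, r, ens.size)  # perturbed observations
          return ens + k * (y_pert - ens)

      analysis = enkf_update(ensemble, obs, obs_err, rng)
      print(f"background mean {ensemble.mean():.3f}, "
            f"analysis mean {analysis.mean():.3f}")
      # The analysis lies between the (low) model ensemble and the (high)
      # SAR observation, mirroring the intermediate values in the abstract.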

  9. Mapping the N-Z plane: residual mass regularities

    International Nuclear Information System (INIS)

    Hirsch, J.G.; Frank, A.; Velazquez, V.

    2004-01-01

    A new development in the study of the deviations between experimental nuclear masses and those calculated in the framework of the Finite Range Droplet Model is introduced. Some frequencies are isolated and used in a simple fit to reduce the error width significantly. The presence of these regular residual correlations suggests that the Strutinsky method of including microscopic fluctuations in nuclear masses could be improved. (Author)

  10. Advancing the research agenda for diagnostic error reduction.

    Science.gov (United States)

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  11. Potential Errors and Test Assessment in Software Product Line Engineering

    Directory of Open Access Journals (Sweden)

    Hartmut Lackner

    2015-04-01

    Full Text Available Software product lines (SPL) are a method for the development of variant-rich software systems. Compared to non-variable systems, testing SPLs is extensive due to the increasing number of possible products. Different approaches exist for testing SPLs, but there is little research on assessing the quality of these tests by means of error detection capability. Such test assessment is based on error injection into correct versions of the system under test. However, to our knowledge, potential errors in SPL engineering have never been systematically identified before. This article presents an overview of existing paradigms for specifying software product lines and the errors that can occur during the respective specification processes. For the assessment of test quality, we apply mutation testing techniques to SPL engineering and implement the identified errors as mutation operators. This allows us to run existing tests against defective products for the purpose of test assessment. From the results, we draw conclusions about the error-proneness of the surveyed SPL design paradigms and how the quality of SPL tests can be improved.

  12. Effect of residual patient motion on dose distribution during image-guided robotic radiosurgery for skull tracking based on log file analysis

    International Nuclear Information System (INIS)

    Inoue, Mitsuhiro; Shiomi, Hiroya; Sato, Kengo

    2014-01-01

    The present study aimed to assess the effect of residual patient motion on dose distribution during intracranial image-guided robotic radiosurgery by analyzing the system log files. The dosimetric effect was analyzed according to the difference between the original and estimated dose distributions, including targeting error, caused by residual patient motion between two successive image acquisitions. One hundred twenty-eight treatments were analyzed. Forty-two patients were treated using the isocentric plan, and 86 patients were treated using the conformal (non-isocentric) plan. The median distance from the imaging center to the target was 55 mm, and the median interval between the acquisitions of sequential images was 79 s. The median translational residual patient motion was 0.1 mm for each axis, and the rotational residual patient motion was 0.1 deg for Δpitch and Δroll and 0.2 deg for Δyaw. The dose error for D95 was within 1% in more than 95% of cases. The maximum dose error for D10 to D90 was within 2%. None of the studied parameters, including the interval between the acquisitions of sequential images, was significantly related to the dosimetric effect. The effect of residual patient motion on dose distribution was minimal. (author)

  13. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  14. Hospital medication errors in a pharmacovigilance system in Colombia

    Directory of Open Access Journals (Sweden)

    Jorge Enrique Machado-Alba

    2015-11-01

    Full Text Available Objective: this study analyzes the medication errors reported to a pharmacovigilance system by 26 hospitals for patients in the healthcare system of Colombia. Methods: this retrospective study analyzed the medication errors reported to a systematized database between 1 January 2008 and 12 September 2013. The medication is dispensed by the company Audifarma S.A. to hospitals and clinics around Colombia. Data were classified according to the taxonomy of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP). The data analysis was performed using SPSS 22.0 for Windows, considering p-values < 0.05 significant. Results: there were 9,062 medication errors in 45 hospital pharmacies. Real errors accounted for 51.9% (n = 4,707), of which 12.0% (n = 567) reached the patient (Categories C to I) and caused harm (Categories E to I) to 17 subjects (0.36%). The main process involved in errors that occurred (Categories B to I) was prescription (n = 1,758; 37.3%), followed by dispensation (n = 1,737; 36.9%), transcription (n = 970; 20.6%) and administration (n = 242; 5.1%). Errors in the administration process were 45.2 times more likely to reach the patient (95% CI: 20.2-100.9). Conclusions: medication error reporting systems and prevention strategies should be widespread in hospital settings, prioritizing efforts to address the administration process.

  15. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Science.gov (United States)

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
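
    A small simulation in the spirit of the paper's point, with invented coefficients: classical error in the exposure attenuates the adjusted estimate, while classical error in the confounder can overestimate it.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100_000

      # Simulated study: confounder c affects both exposure x and outcome y.
      c = rng.standard_normal(n)
      x = 0.8 * c + rng.standard_normal(n)
      y = 1.0 * x + 2.0 * c + rng.standard_normal(n)   # true exposure effect = 1.0

      def adjusted_slope(x_obs, c_obs, y):
          """OLS coefficient of the exposure, adjusting for the confounder."""
          X = np.column_stack([np.ones(len(y)), x_obs, c_obs])
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          return beta[1]

      print("no measurement error:    ", round(adjusted_slope(x, c, y), 3))
      # Classical error in the exposure attenuates its coefficient...
      x_err = x + rng.normal(0, 1.0, n)
      print("error in exposure only:  ", round(adjusted_slope(x_err, c, y), 3))
      # ...but classical error in the confounder leaves residual confounding,
      # which here OVERestimates the exposure-outcome relation.
      c_err = c + rng.normal(0, 1.0, n)
      print("error in confounder only:", round(adjusted_slope(x, c_err, y), 3))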

  16. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Taylor-series and Monte-Carlo-method uncertainty estimation of the width of a probability distribution based on varying bias and random error

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Uncertainties are typically assumed to be constant or a linear function of the measured value; however, this is generally not true. Particle image velocimetry (PIV) is one example of a measurement technique that has highly nonlinear, time-varying local uncertainties. Traditional uncertainty methods are not adequate for estimating the uncertainty of measurement statistics (mean and variance) in the presence of nonlinear, time-varying errors. Propagation of instantaneous uncertainty estimates into measured statistics is performed, allowing accurate uncertainty quantification of the time-mean and statistics of measurements such as PIV. It is shown that random errors will always elevate the measured variance, and thus turbulent statistics such as the mean turbulent stress u'u'. Within this paper, nonlinear, time-varying errors are propagated from instantaneous measurements into the measured mean and variance using the Taylor-series method. With these results and knowledge of the systematic and random uncertainty of each measurement, the uncertainty of the time-mean, the variance and the covariance can be found. The applicability of the Taylor-series uncertainty equations to time-varying systematic and random errors and asymmetric error distributions is demonstrated with Monte-Carlo simulations. The Taylor-series uncertainty estimates are always accurate for uncertainties on the mean quantity. The Taylor-series variance uncertainty is similar to the Monte-Carlo results for cases in which asymmetric random errors exist or the magnitude of the instantaneous variations in the random and systematic errors is near the 'true' variance. However, the Taylor-series method overpredicts the uncertainty in the variance when the instantaneous variations of systematic errors are large or are on the same order of magnitude as the 'true' variance. (paper)
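
    A Monte-Carlo sketch of the paper's central observation that random errors always inflate the measured variance, here with an assumed time-varying noise level rather than PIV data.

      import numpy as np

      rng = np.random.default_rng(4)

      # Signal and error models are illustrative assumptions, not PIV data.
      n = 200_000
      true_signal = np.sin(np.linspace(0, 200 * np.pi, n))   # variance ~ 0.5
      sigma_noise = 0.3 * (1 + 0.5 * np.cos(np.linspace(0, 20 * np.pi, n)))
      measured = true_signal + rng.normal(0.0, 1.0, n) * sigma_noise

      print(f"true variance      : {true_signal.var():.4f}")
      print(f"measured variance  : {measured.var():.4f}")
      # The inflation equals the mean squared noise level, E[sigma^2]:
      print(f"mean noise variance: {np.mean(sigma_noise**2):.4f}")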

  18. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*_disk), and disk maximality (F*_disk,max ≡ V*_disk,max / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  19. Construction of a predictive model for concentration of nickel and vanadium in vacuum residues of crude oils using artificial neural networks and LIBS.

    Science.gov (United States)

    Tarazona, José L; Guerrero, Jáder; Cabanzo, Rafael; Mejía-Ospino, E

    2012-03-01

    A predictive model to determine the concentration of nickel and vanadium in vacuum residues of Colombian crude oils using laser-induced breakdown spectroscopy (LIBS) and artificial neural networks (ANNs) with nodes distributed in multiple layers (multilayer perceptron) is presented. The ANN inputs are intensity values in the vicinity of the emission lines 300.248, 301.200 and 305.081 nm of Ni(I), and 309.310, 310.229 and 311.070 nm of V(II). The effects of varying the number of nodes and the initial weights and biases in the ANNs were systematically explored. Average relative error of calibration/prediction (REC/REP) and average relative standard deviation (RSD) metrics were used to evaluate the performance of the ANN in predicting the concentrations of the two elements studied here. © 2012 Optical Society of America
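
    A sketch of a multilayer-perceptron calibration of this kind, using scikit-learn and synthetic line intensities in place of the measured LIBS spectra.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)

      # Synthetic stand-ins for intensities near the Ni(I) and V(II)
      # lines listed in the abstract; the mapping to concentration is
      # an illustrative assumption.
      n_samples, n_lines = 120, 6
      X = rng.uniform(0.1, 1.0, (n_samples, n_lines))   # line intensities (a.u.)
      conc = 50 * X[:, 0] + 30 * X[:, 3] + rng.normal(0, 1, n_samples)  # ppm

      scaler = StandardScaler().fit(X)
      model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                           random_state=0)
      model.fit(scaler.transform(X), conc)

      pred = model.predict(scaler.transform(X))
      rep = np.mean(np.abs(pred - conc) / conc) * 100   # avg relative error (%)
      print(f"average relative error: {rep:.1f}%")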

  20. Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    Science.gov (United States)

    Stephenson, Edward; Imig, Astrid

    2009-10-01

    The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
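
    A small sketch of the cross-ratio calculation [1], with invented count rates demonstrating the first-order cancellation of acceptance and luminosity differences that the abstract describes.

      import numpy as np

      def cross_ratio_asymmetry(L_up, R_up, L_dn, R_dn):
          """Cross-ratio method: left/right counts for spin-up and
          spin-down states. First-order acceptance and luminosity
          differences cancel; second-order effects, as the abstract
          notes, do not."""
          r = np.sqrt((L_up * R_dn) / (L_dn * R_up))
          return (r - 1.0) / (r + 1.0)   # = p * A_y under ideal conditions

      # Illustrative counts with a 10% acceptance mismatch and a 5%
      # luminosity difference between states; the asymmetry put in is 0.20.
      eps = 0.20
      L_up, R_up = 1.10 * 1000 * (1 + eps), 1000 * (1 - eps)
      L_dn, R_dn = 1.10 * 0.95 * 1000 * (1 - eps), 0.95 * 1000 * (1 + eps)
      print(f"extracted asymmetry: "
            f"{cross_ratio_asymmetry(L_up, R_up, L_dn, R_dn):.4f}")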

  1. Systematic Review of UIT Parameters on Residual Stresses of Sensitized AA5456 and Field Based Residual Stress Measurements for Predicting and Mitigating Stress Corrosion Cracking

    Science.gov (United States)

    2014-03-01

    ... around welds in aluminum ship structures both in the laboratory and in the field. Tensile residual stresses are often generated during welding and, in ... mitigate and even reverse these tensile residual stresses. This research uses x-ray diffraction to measure residual stresses around welds in AA5456 before ...

  2. Analysis of systematic error deviation of water temperature measurement at the fuel channel outlet of the reactor Maria

    International Nuclear Information System (INIS)

    Bykowski, W.

    2000-01-01

    The reactor Maria has two primary cooling circuits: the fuel channel cooling circuit and the reactor pool cooling circuit. Fuel elements are placed inside the fuel channels, which are linked in parallel between the collectors. In the course of reactor operation the following measurements are performed: continuous measurement of water temperature at the fuel channel inlet, continuous measurement of water temperature at the outlet of each fuel channel, and continuous measurement of the water flow rate through each fuel channel. Based on these thermal-hydraulic parameters, the instantaneous thermal power generated in each fuel channel is determined, and from this value the thermal balance and the degree of fuel burnup are assessed. The work contains an analysis estimating the systematic error of the temperature measurement at the outlet of each fuel channel, and hence the erroneous assessment of the thermal power extracted from each fuel channel and the burnup degree of the individual fuel elements. The results of measurements of the separate factors contributing to the deviations for the fuel channels are enclosed. (author)

  3. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    Science.gov (United States)

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
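
    A minimal sketch of quantifying one pathway through such a fault tree with AND/OR gates under an independence assumption; the events and probabilities are invented, not drawn from the reviewed cases.

      from functools import reduce

      def and_gate(*probs):
          # All inputs must occur (independent events)
          return reduce(lambda a, b: a * b, probs)

      def or_gate(*probs):
          # P(at least one) = 1 - P(none), assuming independent events
          return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

      p_history_missed = 0.05       # key history element not elicited
      p_test_not_ordered = 0.03     # appropriate test not ordered
      p_result_not_followed = 0.02  # abnormal result not followed up

      # Top event: diagnosis delayed if the history cue is missed AND
      # either the test is not ordered OR its result is not followed up.
      p_top = and_gate(p_history_missed,
                       or_gate(p_test_not_ordered, p_result_not_followed))
      print(f"P(diagnostic error via this pathway) = {p_top:.5f}")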

  4. Coping with human errors through system design: Implications for ecological interface design

    DEFF Research Database (Denmark)

    Rasmussen, Jens; Vicente, Kim J.

    1989-01-01

    Research during recent years has revealed that human errors are not stochastic events which can be removed through improved training programs or optimal interface design. Rather, errors tend to reflect either systematic interference between various models, rules, and schemata, or the effects of the adaptive mechanisms involved in learning. In terms of design implications, these findings suggest that reliable human-system interaction will be achieved by designing interfaces which tend to minimize the potential for control interference and support recovery from errors. In other words, the focus should be on control of the effects of errors rather than on the elimination of errors per se. In this paper, we propose a theoretical framework for interface design that attempts to satisfy these objectives. The goal of our framework, called ecological interface design, is to develop a meaningful representation…

  5. Error analysis of isotope dilution mass spectrometry method with internal standard

    International Nuclear Information System (INIS)

    Rizhinskii, M.W.; Vitinskii, M.Y.

    1989-02-01

    Computation algorithms for the normalized isotopic ratios and element concentrations in isotope dilution mass spectrometry with an internal standard are presented. A procedure based on Monte-Carlo calculation is proposed for predicting the magnitude of the errors to be expected. The estimation of systematic and random errors is carried out for the certification of uranium and plutonium reference materials as well as for the use of those reference materials in the analysis of irradiated nuclear fuels. 4 refs, 11 figs, 2 tabs
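
    A minimal sketch of this kind of Monte-Carlo error prediction, using a simplified isotope-dilution expression; the ratios, uncertainties and spike concentration are assumed for illustration, not the paper's values:

      import random

      def idms_concentration(r_spike, r_mix, r_sample, c_spike):
          # Simplified isotope-dilution form:
          # C = C_spike * (R_spike - R_mix) / (R_mix - R_sample)
          return c_spike * (r_spike - r_mix) / (r_mix - r_sample)

      random.seed(0)
      N = 100_000
      vals = []
      for _ in range(N):
          r_spike  = random.gauss(100.0, 0.5)    # measured ratios, with assumed
          r_mix    = random.gauss(10.0, 0.05)    # 1-sigma random errors
          r_sample = random.gauss(0.01, 0.0005)
          vals.append(idms_concentration(r_spike, r_mix, r_sample, c_spike=1.0))

      mean = sum(vals) / N
      sd = (sum((v - mean) ** 2 for v in vals) / (N - 1)) ** 0.5
      print(f"predicted: {mean:.4f} +/- {sd:.4f} (relative {sd / mean:.2%})")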

  6. Evaluation of Analysis by Cross-Validation, Part II: Diagnostic and Optimization of Analysis Error Covariance

    Directory of Open Access Journals (Sweden)

    Richard Ménard

    2018-02-01

    We present a general theory of estimation of analysis error covariances based on cross-validation as well as a geometric interpretation of the method. In particular, we use the variance of passive observation-minus-analysis residuals and show that the true analysis error variance can be estimated, without relying on the optimality assumption. This approach is used to obtain near-optimal analyses that are then used to evaluate the air quality analysis error using several different methods at active and passive observation sites. We compare the estimates according to the method of Hollingsworth-Lönnberg, Desroziers et al., a new diagnostic we developed, and the perceived analysis error computed from the analysis scheme, to conclude that, as long as the analysis is near optimal, all estimates agree within a certain error margin.

  7. Applications of human error analysis to aviation and space operations

    International Nuclear Information System (INIS)

    Nelson, W.R.

    1998-01-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) we have been working to apply methods of human error analysis to the design of complex systems. We have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. We are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. These applications lead to different requirements when compared with HRAs performed as part of a PSA. For example, because the analysis will begin early during the design stage, the methods must be usable when only partial design information is available. In addition, the ability to perform numerous ''what if'' analyses to identify and compare multiple design alternatives is essential. Finally, since the goals of such human error analyses focus on proactive design changes rather than the estimation of failure probabilities for PRA, there is more emphasis on qualitative evaluations of error relationships and causal factors than on quantitative estimates of error frequency. The primary vehicle we have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. The first NASA-sponsored project had the goal to evaluate human errors caused by advanced cockpit automation. Our next aviation project focused on the development of methods and tools to apply human error analysis to the design of commercial aircraft. This project was performed by a consortium comprised of INEEL, NASA, and Boeing Commercial Airplane Group. The focus of the project was aircraft design and procedures that could lead to human errors during airplane maintenance

  8. Residual power series method for fractional Sharma-Tasso-Olever equation

    Directory of Open Access Journals (Sweden)

    Amit Kumar

    2016-02-01

    In this paper, we introduce a modified analytical approximate technique to obtain the solution of the time-fractional Sharma-Tasso-Olever equation. First, we present an alternative framework of the residual power series method (RPSM) which can be used simply and effectively to handle nonlinear fractional differential equations arising in several physical phenomena. This method is based on the generalized Taylor series formula and the residual error function. Good agreement is found between our solution and the known solution. It is shown that the proposed method is reliable, efficient and easy to implement for all kinds of fractional nonlinear problems arising in science and technology.

  9. Systematic shifts of evaluated charge centroid for the cathode read-out multiwire proportional chamber

    International Nuclear Information System (INIS)

    Endo, I.; Kawamoto, T.; Mizuno, Y.; Ohsugi, T.; Taniguchi, T.; Takeshita, T.

    1981-01-01

    We have investigated the systematic error associated with the charge centroid evaluation for the cathode read-out multiwire proportional chamber. Correction curves for the systematic error according to six centroid-finding algorithms have been obtained by using the charge distribution calculated in a simple electrostatic model. They have been experimentally examined and proved to be essential for the accurate determination of the irradiated position. (orig.)
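
    For illustration, a sketch of two common centroid-finding algorithms applied to a synthetic Gaussian charge distribution over strips; the offset of each estimate from the true position is the kind of systematic shift the correction curves address (strip pitch normalised to 1; all parameters assumed). The centre-of-gravity estimate shows a shift toward the peak strip, while the logarithmic three-strip estimator is exact for a Gaussian:

      import math

      def strip_charges(x_true, n_strips=5, sigma=0.4):
          """Synthetic charge induced on strips centred at integer positions."""
          return [math.exp(-0.5 * ((i - x_true) / sigma) ** 2)
                  for i in range(n_strips)]

      def centre_of_gravity(q):
          return sum(i * qi for i, qi in enumerate(q)) / sum(q)

      def log_ratio_three_strip(q):
          """Logarithmic three-strip estimator around the peak strip."""
          k = q.index(max(q))
          a, b, c = math.log(q[k - 1]), math.log(q[k]), math.log(q[k + 1])
          return k + 0.5 * (a - c) / (a - 2 * b + c)

      x_true = 2.3
      q = strip_charges(x_true)
      print(f"COG systematic shift:    {centre_of_gravity(q) - x_true:+.4f}")
      print(f"log-ratio 3-strip shift: {log_ratio_three_strip(q) - x_true:+.4f}")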

  10. Addressing the Problem of Negative Lexical Transfer Errors in Chilean University Students

    Directory of Open Access Journals (Sweden)

    Paul Anthony Dissington

    2018-01-01

    Studies of second language learning have revealed a connection between first language transfer and errors in second language production. This paper describes an action research study carried out among Chilean university students studying English as part of their degree programmes. The study focuses on common lexical errors made by Chilean Spanish-speakers due to negative first language transfer and aims to analyse the effects of systematic instruction and practice of this problematic lexis. It is suggested that raising awareness of lexical transfer through focused attention on common transfer errors is valued by students and seems essential for learners to achieve productive mastery.

  11. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    Science.gov (United States)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  12. Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.

    Science.gov (United States)

    Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J

    2016-10-24

    In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
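
    A minimal sketch of the proposed two-mechanism model, with generic dual-rate parameters rather than the paper's fitted values: the fast and slow states learn from the sensory prediction error, while a separate visual correction cancels the residual error on the spot without itself learning:

      A_FAST, B_FAST = 0.92, 0.03    # assumed retention / learning rates
      A_SLOW, B_SLOW = 0.996, 0.004

      perturbation = 1.0
      x_fast = x_slow = 0.0
      for trial in range(200):
          error = perturbation - (x_fast + x_slow)  # sensory prediction error
          visual_correction = error                 # present only with visual feedback
          performance_error = error - visual_correction  # ~0 while feedback is on
          # Adaptation is driven by the prediction error, not the corrected one:
          x_fast = A_FAST * x_fast + B_FAST * error
          x_slow = A_SLOW * x_slow + B_SLOW * error

      print(f"adapted after 200 trials: {x_fast + x_slow:.2f} of the perturbation")
      # Removing visual feedback removes visual_correction only, so the error
      # that reappears reflects just the learned state x_fast + x_slow.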

  13. Lower vs. higher fluid volumes in sepsis - protocol for a systematic review with meta-analysis

    DEFF Research Database (Denmark)

    Meyhoff, T S; Møller, M H; Hjortrup, P B

    2017-01-01

    …sequential analysis of randomised clinical trials comparing different strategies to obtain separation in fluid volumes or balances during resuscitation of adult patients with sepsis. We will systematically search the Cochrane Library, MEDLINE, EMBASE, Science Citation Index, BIOSIS and Epistemonikos for relevant literature. We will follow the recommendations by the Cochrane Collaboration and the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement. The risk of systematic errors (bias) and random errors will be assessed, and the overall quality of evidence will be evaluated…

  14. Energy dependence of fusion evaporation-residue cross sections in the 28Si+12C reaction

    International Nuclear Information System (INIS)

    Vineyard, M.F.; Mateja, J.F.; Beck, C.; Atencio, S.E.; Dennis, L.C.; Frawley, A.D.; Henderson, D.J.; Janssens, R.V.F.; Kemper, K.W.; Kovar, D.G.; Maguire, C.F.; Padalino, S.J.; Prosser, F.W.; Stephans, G.S.F.; Tiede, M.A.; Wilkins, B.D.; Zingarelli, R.A.

    1993-01-01

    Fusion evaporation-residue cross sections for the 28Si+12C reaction have been measured in the energy range 18 ≤ Ec.m. ≤ 136 MeV using time-of-flight techniques. Velocity distributions of mass-identified reaction products were used to identify evaporation residues and to determine the complete-fusion cross sections at high energies. The data are in agreement with previously established systematics which indicate an entrance-channel mass-asymmetry dependence of the incomplete-fusion evaporation-residue process. The complete-fusion evaporation-residue cross sections and the deduced critical angular momenta are compared with earlier measurements and the predictions of existing models

  15. Piggyback intraocular lens implantation to correct pseudophakic refractive error after segmental multifocal intraocular lens implantation.

    Science.gov (United States)

    Venter, Jan A; Oberholster, Andre; Schallhorn, Steven C; Pelouskova, Martina

    2014-04-01

    To evaluate refractive and visual outcomes of secondary piggyback intraocular lens implantation in patients diagnosed as having residual ametropia following segmental multifocal lens implantation. Data of 80 pseudophakic eyes with ametropia that underwent Sulcoflex aspheric 653L intraocular lens implantation (Rayner Intraocular Lenses Ltd., East Sussex, United Kingdom) to correct residual refractive error were analyzed. All eyes previously had in-the-bag zonal refractive multifocal intraocular lens implantation (Lentis Mplus MF30, models LS-312 and LS-313; Oculentis GmbH, Berlin, Germany) and required residual refractive error correction. Outcome measurements included uncorrected distance visual acuity, corrected distance visual acuity, uncorrected near visual acuity, distance-corrected near visual acuity, manifest refraction, and complications. One-year data are presented in this study. The mean spherical equivalent ranged from -1.75 to +3.25 diopters (D) preoperatively (mean: +0.58 ± 1.15 D) and reduced to -1.25 to +0.50 D (mean: -0.14 ± 0.28 D; P < .01). Postoperatively, 93.8% of eyes were within ±0.50 D and 98.8% were within ±1.00 D of emmetropia. The mean uncorrected distance visual acuity improved significantly from 0.28 ± 0.16 to 0.01 ± 0.10 logMAR and 78.8% of eyes achieved 6/6 (Snellen 20/20) or better postoperatively. The mean uncorrected near visual acuity changed from 0.43 ± 0.28 to 0.19 ± 0.15 logMAR. There was no significant change in corrected distance visual acuity or distance-corrected near visual acuity. No serious intraoperative or postoperative complications requiring secondary intraocular lens removal occurred. Sulcoflex lenses proved to be a predictable and safe option for correcting residual refractive error in patients diagnosed as having pseudophakia. Copyright 2014, SLACK Incorporated.

  16. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Directory of Open Access Journals (Sweden)

    Timo B Brakenhoff

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
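
    A small simulation in the spirit of this argument (all parameters assumed): classical measurement error in a confounder leaves residual confounding, which here inflates rather than attenuates the adjusted exposure effect:

      import random

      random.seed(1)
      n = 50_000
      true_beta_x, beta_c = 0.5, 1.0   # exposure and confounder effects

      c = [random.gauss(0, 1) for _ in range(n)]
      x = [0.8 * ci + random.gauss(0, 1) for ci in c]           # exposure
      y = [true_beta_x * xi + beta_c * ci + random.gauss(0, 1)
           for xi, ci in zip(x, c)]
      c_obs = [ci + random.gauss(0, 1) for ci in c]             # noisy confounder

      def ols2(x1, x2, y):
          """Slope of x1 in y ~ x1 + x2 (zero-mean data, 2x2 normal equations)."""
          s11 = sum(a * a for a in x1); s22 = sum(a * a for a in x2)
          s12 = sum(a * b for a, b in zip(x1, x2))
          s1y = sum(a * b for a, b in zip(x1, y))
          s2y = sum(a * b for a, b in zip(x2, y))
          return (s22 * s1y - s12 * s2y) / (s11 * s22 - s12 ** 2)

      print(f"true effect {true_beta_x}, adjusted with noisy confounder: "
            f"{ols2(x, c_obs, y):.3f}")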

  17. IPTV multicast with peer-assisted lossy error control

    Science.gov (United States)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to the impulse noise.

  18. The VTTVIS line imaging spectrometer - principles, error sources, and calibration

    DEFF Research Database (Denmark)

    Jørgensen, R.N.

    2002-01-01

    Hyperspectral imaging with a spatial resolution of a few mm² has proved to have a great potential within crop and weed classification and also within nutrient diagnostics. A commonly used hyperspectral imaging system is based on the Prism-Grating-Prism (PGP) principles produced by Specim Ltd… work describing the basic principles, potential error sources, and/or adjustment and calibration procedures. This report fulfils the need for such documentation, with special focus on the system at KVL. The PGP-based system has several severe error sources, which should be removed prior to any analysis… in off-axis transmission efficiencies, diffraction efficiencies, and image distortion have a significant impact on the instrument performance. Procedures removing or minimising these systematic error sources are developed and described for the system built at KVL but can be generalised to other PGP…

  19. Effects of systematic sampling on satellite estimates of deforestation rates

    International Nuclear Information System (INIS)

    Steininger, M K; Godoy, F; Harper, G

    2009-01-01

    Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
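
    A toy version of such a sampling experiment (a synthetic, spatially random deforestation map; real deforestation is clustered, which widens the intervals):

      import random

      random.seed(7)
      W = H = 200
      defor = [[1 if random.random() < 0.03 else 0 for _ in range(W)]
               for _ in range(H)]          # ~3% of cells deforested

      step = 10                            # regular grid: every 10th cell
      sample = [defor[r][c] for r in range(0, H, step)
                for c in range(0, W, step)]
      n = len(sample)
      p = sum(sample) / n
      se = (p * (1 - p) / n) ** 0.5        # treating the grid as a random sample
      est, half = p * W * H, 1.96 * (p * (1 - p) / n) ** 0.5 * W * H

      print(f"estimate {est:.0f} +/- {half:.0f} cells "
            f"(full map: {sum(map(sum, defor))} cells)")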

  20. Detection of residual packets in cocaine body packers: low accuracy of abdominal radiography - a prospective study

    Energy Technology Data Exchange (ETDEWEB)

    Rousset, Pascal; Vadrot, Dominique; Revel, Marie-Pierre [Assistance Publique-Hopitaux de Paris, Paris (France); Hopital Hotel Dieu, Department of Radiology, Paris (France); Universite Paris-Descartes, Paris (France); Chaillot, Pierre-Fleury [Assistance Publique-Hopitaux de Paris, Paris (France); Hopital Hotel Dieu, Department of Radiology, Paris (France); Audureau, Etienne [Assistance Publique-Hopitaux de Paris, Paris (France); Hopital Hotel Dieu, Department of Biostatistics and Epidemiology, Paris (France); Universite Paris-Descartes, Paris (France); Rey-Salmon, Caroline; Becour, Bertrand [Assistance Publique-Hopitaux de Paris, Paris (France); Hopital Hotel Dieu, Department of Forensic, Paris (France); Fitton, Isabelle [Assistance Publique-Hopitaux de Paris, Paris (France); Hopital Europeen Georges Pompidou, Department of Radiology, Paris (France)

    2013-08-15

    To evaluate the accuracy of abdominal radiography (AXR) for the detection of residual cocaine packets by comparison with computed tomography (CT). Over a 1-year period unenhanced CT was systematically performed in addition to AXR for pre-discharge evaluation of cocaine body packers. AXR and CT were interpreted independently by two radiologists blinded to clinical outcome. Patient and packet characteristics were compared between the groups with residual portage and complete decontamination. Among 138 body packers studied, 14 (10 %) had one residual packet identified on pre-discharge CT. On AXR, at least one reader failed to detect the residual packet in 10 (70 %) of these 14 body packers. The sensitivity and specificity of AXR were 28.6 % (95 % CI: 8.4-58.1) and 100.0 % (95 % CI: 97.0-100.0) for reader 1 and 35.7 % (95 % CI: 12.8-64.9) and 97.6 % (95 % CI: 93.1-99.5) for reader 2. There were no significant patient or packet characteristics predictive of residual portage or AXR false negativity. All positive CT results were confirmed by delayed expulsion or surgical findings, while negative results were confirmed by further surveillance. Given the poor performance of AXR, CT should be systematically performed to ensure safe hospital discharge of cocaine body packers. (orig.)
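
    The kind of accuracy statistics quoted above follow from the 2x2 reader-vs-CT table; the sketch below uses counts consistent with reader 1 (4 of 14 residual packets detected, no false positives among 124 cleared patients) and a Wilson score interval as a stand-in for the exact binomial interval reported:

      def wilson_ci(k, n, z=1.96):
          """Wilson score interval for a binomial proportion k/n."""
          p = k / n
          denom = 1 + z * z / n
          centre = (p + z * z / (2 * n)) / denom
          half = z * ((p * (1 - p) + z * z / (4 * n)) / n) ** 0.5 / denom
          return centre - half, centre + half

      tp, fn = 4, 10    # residual packet present: detected / missed on AXR
      tn, fp = 124, 0   # fully decontaminated: correctly cleared / false alarms

      lo, hi = wilson_ci(tp, tp + fn)
      print(f"sensitivity {tp / (tp + fn):.1%} (95% CI {lo:.1%}-{hi:.1%})")
      lo, hi = wilson_ci(tn, tn + fp)
      print(f"specificity {tn / (tn + fp):.1%} (95% CI {lo:.1%}-{hi:.1%})")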

  2. Thermal error analysis and compensation for digital image/volume correlation

    Science.gov (United States)

    Pan, Bing

    2018-02-01

    Digital image correlation and digital volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermally induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high-accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
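
    A minimal sketch of the recommended reference-sample compensation (synthetic numbers): the apparent displacement of a rigid, unloaded reference sample estimates the drift of the imaging geometry and is subtracted from the test-sample measurement in situ:

      test_disp = [0.002, 0.011, 0.024, 0.038]  # mm, DIC result on test sample
      ref_disp  = [0.001, 0.009, 0.020, 0.031]  # mm, DIC result on fixed reference

      # Whatever the stationary reference appears to do is attributed to
      # camera self-heating / ambient drift and removed.
      corrected = [t - r for t, r in zip(test_disp, ref_disp)]
      print("thermal-error-corrected displacements (mm):", corrected)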

  3. Monte-Carlo error analysis in x-ray spectral deconvolution

    International Nuclear Information System (INIS)

    Shirk, D.G.; Hoffman, N.M.

    1985-01-01

    The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: It allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels
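
    A sketch of the Monte-Carlo idea with a synthetic response matrix (not the paper's spectra): each trial perturbs the m observed channels by their counting errors and re-deconvolves with a truncated-SVD pseudo-inverse, the spread of the n unfolded points giving energy-dependent error bars; nonnegativity is applied after the SVD step, as advocated above:

      import numpy as np

      rng = np.random.default_rng(0)
      m, n = 6, 10
      E = np.linspace(1.0, 10.0, n)
      # Gaussian response of channel i peaking near E = 1.5*i + 1 (synthetic):
      R = np.exp(-((1.5 * np.arange(m)[:, None] + 1 - E[None, :]) / 2.0) ** 2)
      true_spec = np.exp(-E / 3.0)
      data = R @ true_spec
      sigma = 0.05 * data + 1e-3                 # assumed channel uncertainties

      R_pinv = np.linalg.pinv(R, rcond=1e-3)     # truncated-SVD pseudo-inverse
      trials = np.array([np.clip(R_pinv @ (data + rng.normal(0, sigma)), 0, None)
                         for _ in range(2000)])

      print("unfolded:  ", np.round(trials.mean(axis=0), 3))
      print("error bars:", np.round(trials.std(axis=0), 3))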

  4. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170 of 93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. There was some relationship to

  5. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors in which coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.

  6. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  7. Learning (from) the errors of a systems biology model.

    Science.gov (United States)

    Engelhardt, Benjamin; Fröhlich, Holger; Kschischo, Maik

    2016-02-11

    Mathematical modelling is a labour intensive process involving several iterations of testing on real data and manual model modifications. In biology, the domain knowledge guiding model development is in many cases itself incomplete and uncertain. A major problem in this context is that biological systems are open. Missed or unknown external influences as well as erroneous interactions in the model could thus lead to severely misleading results. Here we introduce the dynamic elastic-net, a data driven mathematical method which automatically detects such model errors in ordinary differential equation (ODE) models. We demonstrate for real and simulated data, how the dynamic elastic-net approach can be used to automatically (i) reconstruct the error signal, (ii) identify the target variables of model error, and (iii) reconstruct the true system state even for incomplete or preliminary models. Our work provides a systematic computational method facilitating modelling of open biological systems under uncertain knowledge.

  8. A systematic review of patient medication error on self-administering medication at home.

    Science.gov (United States)

    Mira, José Joaquín; Lorenzo, Susana; Guilabert, Mercedes; Navarro, Isabel; Pérez-Jover, Virtudes

    2015-06-01

    Medication errors have been analyzed as a health professionals' responsibility (due to mistakes in prescription, preparation or dispensing). However, sometimes patients themselves (or their caregivers) make mistakes in the administration of the medication. The epidemiology of patient medication errors (PEs) has been scarcely reviewed in spite of its impact on people, on therapeutic effectiveness and on incremental cost for the health systems. This study reviews and describes the methodological approaches and results of published studies on the frequency, causes and consequences of medication errors committed by patients at home. A review of research articles published between 1990 and 2014 was carried out using MEDLINE, Web-of-Knowledge, Scopus, Tripdatabase and Index Medicus. The frequency of PEs ranged from 19 to 59%. Elderly people and preschool children made more errors than other groups. The most common errors were: incorrect dosage, forgetting, mixing up medications, failing to recall indications and taking out-of-date or inappropriately stored drugs. The majority of these mistakes had no negative consequences. Health literacy, information and communication, and complexity of use of dispensing devices were identified as causes of PEs. Apps and other new technologies offer several opportunities for improving drug safety.

  9. Performance of muon reconstruction including Alignment Position Errors for 2016 Collision Data

    CERN Document Server

    CMS Collaboration

    2016-01-01

    From the 2016 run, muon reconstruction uses non-zero Alignment Position Errors to account for the residual uncertainties of the muon chambers' positions. Significant improvements are obtained, in particular for the startup phase after opening/closing the muon detector. Performance results are presented for real data and MC simulations, for both the offline reconstruction and the High-Level Trigger.

  10. CALIBRATION ERRORS IN THE CAVITY BEAM POSITION MONITOR SYSTEM AT THE ATF2

    CERN Document Server

    Cullinan, F; Joshi, N; Lyapin, A

    2011-01-01

    It has been shown at the Accelerator Test Facility at KEK that it is possible to run a system of 37 cavity beam position monitors (BPMs) and achieve high working resolution. However, the stability of the calibration constants (position scale and radio-frequency (RF) phase) over a three- to four-week running period is yet to be demonstrated. During the calibration procedure, random beam jitter gives rise to a statistical error in the position scale, and slow orbit drift in position and tilt causes systematic errors in both the position scale and the RF phase. These errors are dominant and have been evaluated for each BPM. The results are compared with the errors expected after a tested method of beam-jitter subtraction has been applied.

  11. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty at scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
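
    A numerical illustration of the central point (synthetic allometry with assumed coefficients): the confidence-interval component of a plot-mean prediction is fixed by the regression, while the individual-variation component shrinks as 1/n with the number of trees:

      import numpy as np

      rng = np.random.default_rng(42)
      n_fit = 100
      dbh = rng.uniform(10, 50, n_fit)                       # diameters, cm
      log_mass = 2.0 + 2.4 * np.log(dbh) + rng.normal(0, 0.3, n_fit)

      X = np.column_stack([np.ones(n_fit), np.log(dbh)])
      beta, *_ = np.linalg.lstsq(X, log_mass, rcond=None)
      resid = log_mass - X @ beta
      s2 = resid @ resid / (n_fit - 2)                       # residual variance
      XtX_inv = np.linalg.inv(X.T @ X)

      x0 = np.array([1.0, np.log(30.0)])                     # a 30 cm tree
      var_mean = s2 * x0 @ XtX_inv @ x0  # uncertainty of the fitted mean (CI part)
      print(f"single-tree prediction SD: {(var_mean + s2) ** 0.5:.3f} (log units)")
      for n_trees in (1, 5, 30):
          var_plot = var_mean + s2 / n_trees  # mean of n same-size trees
          print(f"plot of {n_trees:>2} trees: SD {var_plot ** 0.5:.3f}")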

  12. The DiskMass Survey. II. Error Budget

    Science.gov (United States)

    Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas

    2010-06-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*^disk), and disk maximality (F*,max^disk ≡ V*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles are reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  13. A New Method to Detect and Correct the Critical Errors and Determine the Software-Reliability in Critical Software-System

    International Nuclear Information System (INIS)

    Krini, Ossmane; Börcsök, Josef

    2012-01-01

    In order to use electronic systems comprising software and hardware components in safety-related and highly safety-related applications, it is necessary to meet the marginal risk numbers required by standards and legislative provisions. Existing processes and mathematical models are used to verify these risk numbers. On the hardware side, various accepted mathematical models, processes, and methods exist to provide the required proof. To this day, however, no closed models or mathematical procedures are known that allow a dependable prediction of software reliability. This work presents a method for estimating the number of residual critical errors in software; conventional models lack this ability, and at present there are no methods that forecast critical errors. The new method shows that the residual number of critical errors in software systems can be estimated using a combination of prediction models, a ratio of critical errors, and the total error number. Subsequently, the expected-value function for critical errors at any point in time can be derived from the new solution method, provided the detection rate has been calculated using an appropriate estimation method. The presented method also makes it possible to estimate the critical failure rate. The approach is modelled on a real process and therefore describes two essential processes: detection and correction.
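
    A back-of-the-envelope sketch of the combination described (all numbers assumed, and the growth-model prediction itself is taken as given): the total residual error count is scaled by the observed share of critical errors among those already found:

      n_predicted_total = 120    # total errors predicted by a growth model (assumed)
      n_detected_total  = 104    # errors detected and corrected so far
      n_detected_crit   = 13     # of which classified critical

      critical_ratio = n_detected_crit / n_detected_total
      residual_crit  = critical_ratio * (n_predicted_total - n_detected_total)
      print(f"estimated residual critical errors: {residual_crit:.1f}")

      exposure_hours = 5.0e4     # assumed accumulated operating time
      print(f"rough critical failure-rate estimate: "
            f"{residual_crit / exposure_hours:.2e} per hour")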

  14. Construction of a risk assessment system for chemical residues in agricultural products.

    Science.gov (United States)

    Choi, Shinai; Hong, Jiyeon; Lee, Dayeon; Paik, Minkyoung

    2014-01-01

    Continuous monitoring of chemical residues in agricultural and food products has been performed by various government bodies in South Korea. These bodies have attempted to manage this information systematically by creating a monitoring database as well as a system, based on these data, with which to assess the health risk of chemical residues in agricultural products. Meanwhile, a database system consisting of monitoring information is being constructed and, following this, demand for convenience has led to the need for an evaluation tool built on this data processing system. Also, in order to create a systematic and effective tool for the risk assessment of chemical residues in foods and agricultural products, various evaluation models are being developed, both domestically and abroad. Overseas, systems such as the Dietary Exposure Evaluation Model: Food Commodity Intake Database and the Cumulative and Aggregate Risk Evaluation System are in use, centred on the US Environmental Protection Agency, while the EU has developed the Pesticide Residue Intake Model for assessing pesticide exposure through food intake. Following these, the National Academy of Agricultural Science (NAAS) created the Agricultural Products Risk Assessment System (APRAS), which supports the storage and use of monitoring information and risk assessments. APRAS efficiently manages the monitoring data produced by NAAS through an extraction feature included in the database system. The database system in APRAS consists of the monitoring database held by NAAS and a food consumption database, the latter based on the Korea National Health and Nutrition Examination Survey. The system is aimed at exposure and risk assessments for chemical residues in agricultural products under different exposure scenarios.

  15. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
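
    The two advocated statistics fall directly out of the empirical CDF of absolute errors; a minimal sketch with synthetic benchmark errors (threshold and confidence level chosen arbitrarily):

      import random

      random.seed(3)
      errors = [random.gauss(0.8, 2.0) for _ in range(500)]  # signed model errors
      abs_err = sorted(abs(e) for e in errors)
      n = len(abs_err)

      threshold = 1.0   # (1) probability of an absolute error below a threshold
      p_below = sum(e < threshold for e in abs_err) / n

      conf = 0.95       # (2) error amplitude not exceeded at this confidence
      q_conf = abs_err[min(n - 1, int(conf * n))]

      print(f"P(|err| < {threshold}) = {p_below:.2f}")
      print(f"{conf:.0%}-confidence maximal |err| = {q_conf:.2f}")
      # A mean signed error (~0.8 here) conveys neither quantity.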

  16. Residual nilpotence and residual solubility of groups

    International Nuclear Information System (INIS)

    Mikhailov, R V

    2005-01-01

    The properties of the residual nilpotence and the residual solubility of groups are studied. The main objects under investigation are the class of residually nilpotent groups such that each central extension of these groups is also residually nilpotent and the class of residually soluble groups such that each Abelian extension of these groups is residually soluble. Various examples of groups not belonging to these classes are constructed by homological methods and methods of the theory of modules over group rings. Several applications of the theory under consideration are presented and problems concerning the residual nilpotence of one-relator groups are considered.

  17. Double checking medicines: defence against error or contributory factor?

    Science.gov (United States)

    Armitage, Gerry

    2008-08-01

    The double checking of medicines in health care is a contestable procedure. It occupies an obvious position in health care practice and is understood to be an effective defence against medication error but the process is variable and the outcomes have not been exposed to testing. This paper presents an appraisal of the process using data from part of a larger study on the contributory factors in medication errors and their reporting. Previous research studies are reviewed; data are analysed from a review of 991 drug error reports and a subsequent series of 40 in-depth interviews with health professionals in an acute hospital in northern England. The incident reports showed that errors occurred despite double checking but that action taken did not appear to investigate the checking process. Most interview participants (34) talked extensively about double checking but believed the process to be inconsistent. Four key categories were apparent: deference to authority, reduction of responsibility, automatic processing and lack of time. Solutions to the problems were also offered, which are discussed with several recommendations. Double checking medicines should be a selective and systematic procedure informed by key principles and encompassing certain behaviours. Psychological research may be instructive in reducing checking errors but the aviation industry may also have a part to play in increasing error wisdom and reducing risk.

  18. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    Science.gov (United States)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and confined to a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas, given the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for future improvements in numerical models to reduce the systematic bias and the SPB phenomenon in ENSO predictions.

  19. Errors prevention in manufacturing process through integration of Poka Yoke and TRIZ

    Science.gov (United States)

    Helmi, Syed Ahmad; Nordin, Nur Nashwa; Hisjam, Muhammad

    2017-11-01

    Integration of Poka Yoke and TRIZ is a method of solving problems using complementary approaches: Poka Yoke is a trial-and-error method, while TRIZ uses a systematic approach. The main purpose of this technique is to eliminate product defects by preventing or correcting errors as soon as possible. Blaming workers for their mistakes is not the best way; rather, the work process should be reviewed so that workers' behaviors or movements do not cause errors. This study demonstrates the importance of using both methods, since everyone in the industry needs to improve quality and increase productivity while reducing production cost.

  20. Three-dimensional patient setup errors at different treatment sites measured by the Tomotherapy megavoltage CT

    Energy Technology Data Exchange (ETDEWEB)

    Hui, S.K.; Lusczek, E.; Dusenbery, K. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; DeFor, T. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Biostatistics and Informatics Core; Levitt, S. [Univ. of Minnesota Medical School, Minneapolis, MN (United States). Dept. of Therapeutic Radiology - Radiation Oncology; Karolinska Institutet, Stockholm (Sweden). Dept. of Onkol-Patol

    2012-04-15

    Reduction of interfraction setup uncertainty is vital for assuring the accuracy of conformal radiotherapy. We report a systematic study of setup error to assess patients' three-dimensional (3D) localization at various treatment sites. Tomotherapy megavoltage CT (MVCT) images were scanned daily in 259 patients from 2005-2008. We analyzed 6,465 MVCT images to measure setup error for head and neck (H and N), chest/thorax, abdomen, prostate, legs, and total marrow irradiation (TMI). Statistical comparisons of the absolute displacements across sites and time were performed in rotation (R), lateral (x), craniocaudal (y), and vertical (z) directions. The global systematic errors were measured to be less than 3 mm in each direction with increasing order of errors for different sites: H and N, prostate, chest, pelvis, spine, legs, and TMI. The differences in displacements in the x, y, and z directions, and 3D average displacement between treatment sites were significant (p < 0.01). Overall improvement in patient localization with time (after 3-4 treatment fractions) was observed. Large displacement (> 5 mm) was observed in the 75th percentile of the patient groups for chest, pelvis, legs, and spine in the x and y directions in the second week of the treatment. MVCT imaging is essential for determining 3D setup error and to reduce uncertainty in localization at all anatomical locations. Setup error evaluation should be performed daily for all treatment regions, preferably for all treatment fractions. (orig.)
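
    A sketch of the standard decomposition of daily setup shifts into group-systematic and random components (synthetic shifts for three hypothetical patients, not this study's data): the systematic error Sigma is the spread of per-patient means, the random error sigma the RMS of per-patient spreads:

      from statistics import mean, stdev

      shifts_y = {                    # daily craniocaudal shifts, mm
          "pt1": [1.2, 0.8, 1.5, 1.1],
          "pt2": [-0.4, 0.1, -0.6, -0.2],
          "pt3": [2.1, 2.6, 1.8, 2.3],
      }

      patient_means = [mean(v) for v in shifts_y.values()]
      patient_sds   = [stdev(v) for v in shifts_y.values()]

      M = mean(patient_means)                               # group mean
      Sigma = stdev(patient_means)                          # systematic error
      sigma = mean(s ** 2 for s in patient_sds) ** 0.5      # random error
      print(f"M = {M:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")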

  1. Analysis of ionospheric structure influences on residual ionospheric errors in GNSS radio occultation bending angles based on ray tracing simulations

    Science.gov (United States)

    Liu, Congliang; Kirchengast, Gottfried; Sun, Yueqiang; Zhang, Kefei; Norman, Robert; Schwaerz, Marc; Bai, Weihua; Du, Qifei; Li, Ying

    2018-04-01

    The Global Navigation Satellite System (GNSS) radio occultation (RO) technique is widely used to observe the atmosphere for applications such as numerical weather prediction and global climate monitoring. The ionosphere is a major error source to RO at upper stratospheric altitudes, and a linear dual-frequency bending angle correction is commonly used to remove the first-order ionospheric effect. However, the higher-order residual ionospheric error (RIE) can still be significant, so it needs to be further mitigated for high-accuracy applications, especially from 35 km altitude upward, where the RIE is most relevant compared to the decreasing magnitude of the atmospheric bending angle. In a previous study we quantified RIEs using an ensemble of about 700 quasi-realistic end-to-end simulated RO events, finding typical RIEs at the 0.1 to 0.5 µrad noise level, but were left with 26 exceptional events with anomalous RIEs at the 1 to 10 µrad level that remained unexplained. In this study, we focused on investigating the causes of the high RIE of these exceptional events, employing detailed along-ray-path analyses of atmospheric and ionospheric refractivities, impact parameter changes, and bending angles and RIEs under asymmetric and symmetric ionospheric structures. We found that the main causes of the high RIEs are a combination of physics-based effects - where asymmetric ionospheric conditions play the primary role, more than the ionization level driven by solar activity - and technical ray tracer effects due to occasions of imperfect smoothness in ionospheric refractivity model derivatives. We also found that along-ray impact parameter variations of more than 10 to 20 m are possible due to ionospheric asymmetries and, depending on prevailing horizontal refractivity gradients, are positive or negative relative to the initial impact parameter at the GNSS transmitter. Furthermore, mesospheric RIEs are found generally higher than upper-stratospheric ones, likely due to

  2. SYSTEMATIC UNCERTAINTIES IN BLACK HOLE MASSES DETERMINED FROM SINGLE-EPOCH SPECTRA

    International Nuclear Information System (INIS)

    Denney, Kelly D.; Peterson, Bradley M.; Dietrich, Matthias; Bentz, Misty C.; Vestergaard, Marianne

    2009-01-01

    We explore the nature of systematic errors that can arise in measurement of black hole masses from single-epoch (SE) spectra of active galactic nuclei (AGNs) by utilizing the many epochs available for NGC 5548 and PG1229+204 from reverberation mapping (RM) databases. In particular, we examine systematics due to AGN variability, contamination due to constant spectral components (i.e., narrow lines and host galaxy flux), data quality (i.e., signal-to-noise ratio (S/N)), and blending of spectral features. We investigate the effect that each of these systematics has on the precision and accuracy of SE masses calculated from two commonly used line width measures by comparing these results to recent RM studies. We calculate masses by characterizing the broad Hβ emission line by both the full width at half maximum and the line dispersion, and demonstrate the importance of removing narrow emission-line components and host starlight. We find that the reliability of line width measurements rapidly decreases for S/N lower than ∼10-20 per pixel, and that fitting the line profiles instead of direct measurement of the data does not mitigate this problem but can, in fact, introduce systematic errors. We also conclude that a full spectral decomposition to deblend the AGN and galaxy spectral features is unnecessary, except to judge the contribution of the host galaxy to the luminosity and to deblend any emission lines that may inhibit accurate line width measurements. Finally, we present an error budget which summarizes the minimum observable uncertainties as well as the amount of additional scatter and/or systematic offset that can be expected from the individual sources of error investigated. In particular, we find that the minimum observable uncertainty in SE mass estimates due to variability is … for high signal-to-noise (∼20 pixel⁻¹) spectra.

  3. Interactive analysis of human error factors in NPP operation events

    International Nuclear Information System (INIS)

    Zhang Li; Zou Yanhua; Huang Weigang

    2010-01-01

    Interactions of human error factors in NPP operation events are introduced. A total of 645 WANO operation event reports from 1999 to 2008 were analyzed, of which 432 were found to involve human errors. After classifying these errors by root causes or causal factors and applying SPSS for correlation analysis, we concluded: (1) Personnel work practices are restricted by many factors; forming good personnel work practices is systematic work that needs support in many respects. (2) Verbal communications, personnel work practices, man-machine interfaces, and written procedures and documents play great roles. They are four interacting factors that often come in a bundle; if improvements need to be made to one of them, synchronous measures are also necessary for the others. (3) Management direction and the decision process, which are related to management, have a significant interaction with personnel factors. (authors)

  4. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    Science.gov (United States)

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise, modulated by the envelope first, and then filtered).
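
    The distinction driving the result above is purely the order of the modulation and filtering operations. Below is a minimal numpy/scipy sketch of the two constructions, with an invented envelope and a placeholder one-pole filter standing in for the ground-motion filter.

      # Hedged sketch: uniformly modulated filtered white noise (filter first,
      # then apply the envelope) versus filtered shot noise (apply the envelope
      # first, then filter). Only the order of operations differs.
      import numpy as np
      from scipy.signal import lfilter

      rng = np.random.default_rng(0)
      fs, dur = 100.0, 20.0                      # sample rate (Hz), duration (s)
      t = np.arange(0, dur, 1.0 / fs)
      w = rng.standard_normal(t.size)            # white noise
      env = (t / 2.0) * np.exp(1.0 - t / 2.0)    # simple ground-motion envelope
      b, a = [0.1], [1.0, -0.9]                  # placeholder one-pole filter

      modulated_filtered = env * lfilter(b, a, w)   # the model with low-freq bias
      filtered_shot      = lfilter(b, a, env * w)   # modulate first, then filter

      # The spectra differ mainly at periods longer than the effective duration.
      for name, x in [("mod-filt", modulated_filtered), ("filt-shot", filtered_shot)]:
          spec = np.abs(np.fft.rfft(x))
          print(name, "low-frequency amplitude:", spec[1:4].round(2))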

  5. A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers

    Science.gov (United States)

    Halverson, Samuel; Terrien, Ryan; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Gudmundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen

    2016-01-01

    We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
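
    When, as stated above, no single term sets the measurement floor, the individual contributions are conventionally rolled up in quadrature. A minimal sketch with invented term names and magnitudes (not NEID's actual budget):

      # Hedged sketch: combining independent instrumental error terms in
      # quadrature, the usual way a Doppler RV error budget is rolled up.
      import math

      error_terms_m_s = {
          "photon noise": 0.27,
          "wavelength calibration": 0.10,
          "detector effects": 0.08,
          "fiber illumination": 0.05,
          "thermo-mechanical drift": 0.10,
      }

      total = math.sqrt(sum(v**2 for v in error_terms_m_s.values()))
      print(f"combined single-measurement floor: {total:.2f} m/s")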

  6. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    Science.gov (United States)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  7. Identification and Assessment of Human Errors in Postgraduate Endodontic Students of Kerman University of Medical Sciences by Using the SHERPA Method

    Directory of Open Access Journals (Sweden)

    Saman Dastaran

    2016-03-01

    Introduction: Human errors cause many accidents, industrial and medical alike, so finding an approach to identify and reduce them is very important. Since no study has addressed human errors in the dental field, this study aimed to identify and assess human errors among postgraduate endodontic students of Kerman University of Medical Sciences using the SHERPA method. Methods: This cross-sectional study was performed during 2014. Data were collected by observing tasks and interviewing postgraduate endodontic students. Overall, 10 critical tasks most likely to cause harm to patients were determined. Next, Hierarchical Task Analysis (HTA) was conducted and human errors in each task were identified with Systematic Human Error Reduction and Prediction Approach (SHERPA) worksheets. Results: Analysis of the SHERPA worksheets identified 90 human errors: action errors (67.7%), checking errors (13.3%), selection errors (8.8%), retrieval errors (5.5%) and communication errors (4.4%). Action errors were thus the most common and communication errors the least common. Conclusions: The results of the study showed that the highest percentage of errors and the highest level of risk were associated with action errors; therefore, to reduce the occurrence of such errors and limit their consequences, control measures should be put in place, including periodic training in work procedures, provision of work checklists, development of guidelines, and establishment of a systematic and standardized reporting system. Given the results of this study, the control of recovery errors, with the highest percentage of undesirable risk, and action errors, with the highest frequency of errors, should be in the priority of control

  8. Setup errors and effectiveness of Optical Laser 3D Surface imaging system (Sentinel) in postoperative radiotherapy of breast cancer.

    Science.gov (United States)

    Wei, Xiaobo; Liu, Mengjiao; Ding, Yun; Li, Qilin; Cheng, Changhai; Zong, Xian; Yin, Wenming; Chen, Jie; Gu, Wendong

    2018-05-08

    Breast-conserving surgery (BCS) plus postoperative radiotherapy has become the standard treatment for early-stage breast cancer. The aim of this study was to compare the setup accuracy of optical surface imaging by the Sentinel system with the cone-beam computerized tomography (CBCT) imaging currently used in our clinic for patients who received BCS. Two optical surface scans were acquired, before and immediately after couch movement correction. The correlation between the setup errors as determined by the initial optical surface scan and CBCT was analyzed. The deviation of the second optical surface scan from the reference planning CT was considered an estimate of the residual errors of the new method for patient setup correction. The consequences in terms of the planning target volume (PTV) margins necessary for treatment sessions without setup correction were also evaluated. We analyzed 145 scans in 27 patients treated for early-stage breast cancer. The setup errors of skin-marker-based patient alignment by optical surface scan and CBCT were correlated, and the residual setup errors as determined by the optical surface scan after couch movement correction were reduced. Optical surface imaging provides a convenient method for improving the setup accuracy for breast cancer patients without unnecessary imaging dose.
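
    For context, PTV margins of the kind referred to above are commonly derived from population setup-error statistics with the widely used van Herk recipe, margin = 2.5Σ + 0.7σ, where Σ is the standard deviation of systematic errors across patients and σ the day-to-day random error. A minimal sketch with illustrative numbers (not this study's measured values):

      # Hedged sketch: the van Herk PTV margin recipe. Residual errors after
      # surface-guided correction would shrink both inputs.
      def ptv_margin_mm(systematic_sd_mm, random_sd_mm):
          return 2.5 * systematic_sd_mm + 0.7 * random_sd_mm

      # Illustrative only: correction reduces, e.g., Sigma 3->1 mm, sigma 3->2 mm.
      print(ptv_margin_mm(3.0, 3.0))  # without correction: 9.6 mm
      print(ptv_margin_mm(1.0, 2.0))  # with correction:    3.9 mm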

  9. Prevention of prescription errors by computerized, on-line, individual patient related surveillance of drug order entry.

    Science.gov (United States)

    Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M

    2002-01-01

    Computerized prescription of drugs is expected to reduce the number of many preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes made to the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.

  10. Systematic Uncertainties in Black Hole Masses Determined from Single Epoch Spectra

    DEFF Research Database (Denmark)

    Denney, Kelly D.; Peterson, Bradley M.; Dietrich, Matthias

    2008-01-01

    We explore the nature of systematic errors that can arise in measurement of black hole masses from single-epoch spectra of active galactic nuclei (AGNs) by utilizing the many epochs available for NGC 5548 and PG1229+204 from reverberation mapping databases. In particular, we examine systematics due...

  11. Notice of Violation of IEEE Publication PrinciplesJoint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath

    Science.gov (United States)

    Li, Lei; Hu, Jianhao

    2010-12-01

    Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M.U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction" by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance of the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed
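
    To make the RRNS idea concrete: a value is carried as residues modulo pairwise-coprime moduli, and redundant moduli let the decoder flag a corrupted residue channel because the Chinese-remainder reconstruction falls outside the legitimate range. A minimal Python sketch with toy moduli (not the moduli sets of the cited papers); it requires Python 3.8+ for pow(x, -1, m).

      # Hedged sketch of a redundant residue number system (RRNS).
      from math import prod

      MODULI = [7, 11, 13, 15]     # pairwise coprime; last two are redundant
      LEGIT_RANGE = 7 * 11         # the first two channels suffice for x < 77

      def encode(x):
          return [x % m for m in MODULI]

      def crt(residues, moduli):
          # Chinese remainder theorem reconstruction.
          M = prod(moduli)
          x = 0
          for r, m in zip(residues, moduli):
              Mi = M // m
              x += r * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
          return x % M

      def detect_error(residues):
          # A single corrupted residue pushes the reconstruction outside the
          # legitimate range, so the upset is detected.
          return crt(residues, MODULI) >= LEGIT_RANGE

      word = encode(42)
      word[2] ^= 1                 # simulate a single-event upset in one channel
      print("upset detected:", detect_error(word))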

  12. TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors

    Energy Technology Data Exchange (ETDEWEB)

    Ford, E; Phillips, M; Bojechko, C [University of Washington, Seattle, WA (United States)

    2015-06-15

    Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3mm gamma criterion. To simulate errors, modifications were made to treatment plans using four possible classes of error: 1) patient misalignment, 2) changes in patient body habitus, 3) machine output changes and 4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of an error, the area under the ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, the machine output and patient habitus, the AUC varied from 0.78 to 0.97, scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84% and 92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52-0.74). Some errors with weak detectability produced large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry to detect variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. No correlation was found between the detectability of an error, as quantified by the gamma pass rate and ROC analysis, and its impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
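
    The detectability analysis above reduces to scoring each fraction (here, by gamma pass rate) and computing the ROC AUC between error-present and error-free cases. A minimal sketch with simulated pass rates (requires scikit-learn; all numbers invented, not the study's data):

      # Hedged sketch: ROC analysis of an error-detection metric.
      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(1)
      pass_rate_no_error = rng.normal(96.0, 2.0, 200)   # simulated gamma pass rates
      pass_rate_error    = rng.normal(90.0, 4.0, 200)

      scores = np.concatenate([pass_rate_no_error, pass_rate_error])
      labels = np.concatenate([np.zeros(200), np.ones(200)])   # 1 = error present

      # A lower pass rate should indicate an error, so feed the negated score.
      auc = roc_auc_score(labels, -scores)
      fpr, tpr, thresholds = roc_curve(labels, -scores)
      best = np.argmax(tpr - fpr)                # Youden's J optimal operating point
      print(f"AUC = {auc:.2f}, optimal gamma threshold ~ {-thresholds[best]:.1f}%")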

  13. Sun drying of residual annatto seed powder

    Directory of Open Access Journals (Sweden)

    Dyego da Costa Santos

    2015-01-01

    Residual annatto seeds are a waste product from bixin extraction in the food, pharmaceutical and cosmetic industries. Most of this by-product is currently discarded; however, the use of these seeds in human foods, through the elaboration of a powder added to other commercial powders, is seen as a viable option. This study aimed at drying residual annatto seed powder, with and without the oil layer derived from the industrial extraction of bixin, fitting different mathematical models to the experimental data and calculating the effective moisture diffusivity of the samples. Powder containing oil exhibited the shortest drying time, highest drying rate (≈ 5.0 kg kg-1 min-1) and highest effective diffusivity (6.49 × 10-12 m2 s-1). All mathematical models assessed provided a suitable representation of the drying kinetics of powders with and without oil, with R2 above 0.99 and root mean square error values lower than 1.0.
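
    The model fitting described above amounts to least-squares estimation of a thin-layer drying curve and scoring it with R2 and RMSE. A minimal sketch using the simple Lewis (Newton) model, MR = exp(-k*t), on invented moisture-ratio data; the study assessed several such models:

      # Hedged sketch: fitting a thin-layer drying model and computing R2/RMSE.
      import numpy as np
      from scipy.optimize import curve_fit

      t = np.array([0., 10., 20., 30., 45., 60., 90., 120.])      # minutes
      mr = np.array([1.0, 0.72, 0.52, 0.38, 0.24, 0.15, 0.06, 0.03])

      lewis = lambda t, k: np.exp(-k * t)
      (k,), _ = curve_fit(lewis, t, mr, p0=[0.01])

      pred = lewis(t, k)
      ss_res = np.sum((mr - pred) ** 2)
      ss_tot = np.sum((mr - mr.mean()) ** 2)
      print(f"k = {k:.4f} 1/min, R2 = {1 - ss_res/ss_tot:.4f}, "
            f"RMSE = {np.sqrt(np.mean((mr - pred) ** 2)):.4f}")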

  14. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g., hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  15. Using total quality management approach to improve patient safety by preventing medication error incidences*.

    Science.gov (United States)

    Yousef, Nadin; Yousef, Farah

    2017-09-04

    One of the predominant causes of medication errors is drug administration error, and a previous study related to our investigations and reviews estimated that the incidence of medication errors constituted 6.7 per 100 administered medication doses. Therefore, we aimed, using the Six Sigma approach, to propose a way to reduce these errors to fewer than 1 per 100 administered medication doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was conducted in a general government hospital. First, we systematically studied the current medication use process. Second, we used the Six Sigma approach, utilizing the five-step DMAIC process (Define, Measure, Analyze, Improve, Control), to find out the real reasons behind such errors and to figure out a useful solution to avoid medication error incidences in daily healthcare professional practice. Data sheets were used as the data-collection tool and Pareto diagrams as the analysis tool. In our investigation, we identified the real causes behind administered medication errors. The Pareto diagrams showed that the fault percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7-fold higher. This means that the mistakes in the prescribing phase, especially poor handwritten prescriptions, whose share of this phase was 17.6%, are responsible for the consequent mistakes later in the treatment process. Therefore, we proposed in this study an effective low-cost strategy based on the behavior of healthcare workers, namely guideline recommendations to be followed by physicians. This method can serve as an early safeguard to decrease errors in the prescribing phase, which may reduce administered medication error incidences to less than 1%. This improved behavior can be effective in improving handwritten prescriptions and decreasing the consequent errors related to administered

  16. Discontinuous Galerkin methods and a posteriori error analysis for heterogenous diffusion problems

    International Nuclear Information System (INIS)

    Stephansen, A.F.

    2007-12-01

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method; the difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained, showing that almost all indicators are independent of heterogeneities. The exception is the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate than the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh adaptation. (author)

  17. Local setup errors in image-guided radiotherapy for head and neck cancer patients immobilized with a custom-made device.

    Science.gov (United States)

    Giske, Kristina; Stoiber, Eva M; Schwarz, Michael; Stoll, Armin; Muenter, Marc W; Timke, Carmen; Roeder, Falk; Debus, Juergen; Huber, Peter E; Thieke, Christian; Bendl, Rolf

    2011-06-01

    To evaluate the local positioning uncertainties during fractionated radiotherapy of head-and-neck cancer patients immobilized using a custom-made fixation device and discuss the effect of possible patient correction strategies for these uncertainties. A total of 45 head-and-neck patients underwent regular control computed tomography scanning using an in-room computed tomography scanner. The local and global positioning variations of all patients were evaluated by applying a rigid registration algorithm. One bounding box around the complete target volume and nine local registration boxes containing relevant anatomic structures were introduced. The resulting uncertainties for a stereotactic setup and the deformations referenced to one anatomic local registration box were determined. Local deformations of the patients immobilized using our custom-made device were compared with previously published results. Several patient positioning correction strategies were simulated, and the residual local uncertainties were calculated. The patient anatomy in the stereotactic setup showed local systematic positioning deviations of 1-4 mm. The deformations referenced to a particular anatomic local registration box were similar to the reported deformations assessed from patients immobilized with commercially available Aquaplast masks. A global correction, including rotational error compensation, decreased the remaining local translational errors. Depending on the chosen patient positioning strategy, the remaining local uncertainties varied considerably. Local deformations in head-and-neck patients occur even if an elaborate, custom-made patient fixation method is used. A rotational error correction decreased the required margins considerably. None of the considered correction strategies achieved perfect alignment. Therefore, weighting of anatomic subregions to obtain the optimal correction vector should be investigated in the future. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Type I and type II residual stress in iron meteorites determined by neutron diffraction measurements

    Science.gov (United States)

    Caporali, Stefano; Pratesi, Giovanni; Kabra, Saurabh; Grazzi, Francesco

    2018-04-01

    In this work we present a preliminary investigation by means of neutron diffraction experiments to determine the residual stress state in three different iron meteorites (Chinga, Sikhote Alin and Nantan). Because of the very peculiar microstructural characteristics of this class of samples, all the systematic effects related to the measuring procedure - such as crystallite size and composition - were taken into account, and a clear differentiation in the statistical distribution of residual stress between coarse- and fine-grained meteorites was highlighted. Moreover, the residual stress state was statistically analysed in three orthogonal directions, finding evidence of the existence of both type I and type II residual stress components. Finally, the application of the von Mises approach allowed us to determine the distribution of type II stress.

  19. Total error shift patterns for daily CT on rails image-guided radiotherapy to the prostate bed

    Directory of Open Access Journals (Sweden)

    Mota Helvecio C

    2011-10-01

    Background: To evaluate the daily total error shift patterns in post-prostatectomy patients undergoing image-guided radiotherapy (IGRT) with a diagnostic-quality computed tomography (CT) on rails system. Methods: A total of 17 consecutive post-prostatectomy patients receiving adjuvant or salvage IMRT using CT-on-rails IGRT were analyzed. The prostate bed's daily total error shifts were evaluated for a total of 661 CT scans. Results: In the right-left, cranial-caudal, and posterior-anterior directions, 11.5%, 9.2%, and 6.5% of the 661 scans required no position adjustments; 75.3%, 66.1%, and 56.8% required a shift of 1-5 mm; 11.5%, 20.9%, and 31.2% required a shift of 6-10 mm; and 1.7%, 3.8%, and 5.5% required a shift of more than 10 mm, respectively. There was evidence of correlation between the x and y, x and z, and y and z axes in 3, 3, and 3 of 17 patients, respectively. Univariate (ANOVA) analysis showed that the total error pattern was random in the x, y, and z axes for 10, 5, and 2 of 17 patients, respectively, and systematic for the rest. Multivariate (MANOVA) analysis showed that the (x,y), (x,z), (y,z), and (x,y,z) total error patterns were random in 5, 1, 1, and 1 of 17 patients, respectively, and systematic for the rest. Conclusions: The overall daily total error shift pattern for these 17 patients, simulated with an empty bladder and treated with CT-on-rails IGRT, was predominantly systematic. Despite this, the temporal vector trends showed complex behaviors and unpredictable changes in magnitude and direction. These findings highlight the importance of using daily IGRT in post-prostatectomy patients.
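
    The random-versus-systematic distinction above is conventionally quantified by decomposing the daily shifts into a group mean, the spread of per-patient means (systematic component, Σ), and the pooled within-patient spread (random component, σ). A minimal sketch on simulated shifts (invented numbers, one axis):

      # Hedged sketch: systematic/random decomposition of daily IGRT couch
      # shifts, the statistics behind ANOVA-style analyses like the one above.
      import numpy as np

      rng = np.random.default_rng(2)
      patient_bias = rng.normal(0.0, 2.0, size=(17, 1))        # per-patient offsets
      shifts = patient_bias + rng.normal(0.0, 2.5, size=(17, 30))  # mm, per fraction

      per_patient_mean = shifts.mean(axis=1)
      per_patient_sd = shifts.std(axis=1, ddof=1)

      M = per_patient_mean.mean()                    # group systematic error
      Sigma = per_patient_mean.std(ddof=1)           # systematic component
      sigma = np.sqrt((per_patient_sd ** 2).mean())  # random component
      print(f"M = {M:.2f} mm, Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")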

  20. Effects of Measurement Error on the Output Gap in Japan

    OpenAIRE

    Koichiro Kamada; Kazuto Masuda

    2000-01-01

    Potential output is the largest amount of products that can be produced by fully utilizing available labor and capital stock; the output gap is defined as the discrepancy between actual and potential output. If data on production factors contain measurement errors, total factor productivity (TFP) cannot be estimated accurately from the Solow residual (i.e., the portion of output that is not attributable to labor and capital inputs). This may give rise to distortions in the estimation of potent...

  1. The role of errors in the measurements performed at the reprocessing plant head-end for material accountancy purposes

    International Nuclear Information System (INIS)

    Foggi, C.; Liebetrau, A.M.; Petraglia, E.

    1999-01-01

    One of the most common procedures used in determining the amount of nuclear material contained in solutions consists of first measuring the volume and the density of the solution, and then determining the concentrations of this material. This presentation will focus on errors generated at the process line in the measurement of volume and density. These errors and their associated uncertainties can be grouped into distinct categories depending on their origin: those attributable to measuring instruments; those attributable to operational procedures; variability in measurement conditions; and errors in the analysis and interpretation of results. Possible error sources, their relative magnitudes, and an error propagation rationale are discussed, with emphasis placed on biases and errors of the last three types, called systematic errors [ru

  2. High energy hadron-induced errors in memory chips

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, R.J. [University of Colorado, Boulder, CO (United States)

    2001-09-01

    We have measured probabilities for proton, neutron and pion beams from accelerators to induce temporary or soft errors in a wide range of modern 16 Mb and 64 Mb dRAM memory chips, typical of those used in aircraft electronics. Relations among the cross sections for these particles are deduced, and failure rates for aircraft avionics due to cosmic rays are evaluated. Measurements of alpha-particle yields from pions on aluminum, as a surrogate for silicon, indicate that these reaction products are the proximate cause of the charge deposition resulting in errors. Heavy ions can cause damage to solar panels and other components in satellites above the atmosphere, by the heavy ionization trails they leave. However, at the earth's surface or at aircraft altitude it is known that cosmic rays, other than heavy ions, can cause soft errors in memory circuit components. Soft errors are those confusions between ones and zeroes that cause wrong contents to be stored in the memory, but without causing permanent damage to the circuit. As modern aircraft rely increasingly upon computerized and automated systems, these soft errors are important threats to safety. Protons, neutrons and pions resulting from high energy cosmic ray bombardment of the atmosphere pervade our environment. These particles do not induce damage directly by their ionization loss, but rather by reactions in the materials of the microcircuits. We have measured many cross sections for soft error upsets (SEU) in a broad range of commercial 16 Mb and 64 Mb dRAMs with accelerator beams. Here we define σ(SEU) = induced errors / (number of sample bits × particles/cm²). We compare σ(SEU) across beams to find relations among the results, and relations to reaction cross sections, in order to systematize the effects. We have modelled cosmic ray effects upon the components we have studied. (Author)

  3. High energy hadron-induced errors in memory chips

    International Nuclear Information System (INIS)

    Peterson, R.J.

    2001-01-01

    We have measured probabilities for proton, neutron and pion beams from accelerators to induce temporary or soft errors in a wide range of modern 16 Mb and 64 Mb dRAM memory chips, typical of those used in aircraft electronics. Relations among the cross sections for these particles are deduced, and failure rates for aircraft avionics due to cosmic rays are evaluated. Measurements of alpha-particle yields from pions on aluminum, as a surrogate for silicon, indicate that these reaction products are the proximate cause of the charge deposition resulting in errors. Heavy ions can cause damage to solar panels and other components in satellites above the atmosphere, by the heavy ionization trails they leave. However, at the earth's surface or at aircraft altitude it is known that cosmic rays, other than heavy ions, can cause soft errors in memory circuit components. Soft errors are those confusions between ones and zeroes that cause wrong contents to be stored in the memory, but without causing permanent damage to the circuit. As modern aircraft rely increasingly upon computerized and automated systems, these soft errors are important threats to safety. Protons, neutrons and pions resulting from high energy cosmic ray bombardment of the atmosphere pervade our environment. These particles do not induce damage directly by their ionization loss, but rather by reactions in the materials of the microcircuits. We have measured many cross sections for soft error upsets (SEU) in a broad range of commercial 16 Mb and 64 Mb dRAMs with accelerator beams. Here we define σ(SEU) = induced errors / (number of sample bits × particles/cm²). We compare σ(SEU) across beams to find relations among the results, and relations to reaction cross sections, in order to systematize the effects. We have modelled cosmic ray effects upon the components we have studied. (Author)

  4. Calibration of the century, apsim and ndicea models of decomposition and n mineralization of plant residues in the humid tropics

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira do Nascimento

    2011-06-01

    The aim of this study was to calibrate the CENTURY, APSIM and NDICEA simulation models for estimating decomposition and N mineralization rates of plant organic materials (Arachis pintoi, Calopogonium mucunoides, Stizolobium aterrimum, Stylosanthes guyanensis) over 360 days in the Atlantic rainforest biome of Brazil. The models' default settings overestimated the decomposition and N-mineralization of plant residues, underlining the fact that the models must be calibrated for use under tropical conditions. For example, the APSIM model simulated the decomposition of the Stizolobium aterrimum and Calopogonium mucunoides residues with error rates of 37.62 and 48.23%, respectively, by comparison with the observed data, and was the least accurate model in the absence of calibration. At the default settings, the NDICEA model produced error rates of 10.46 and 14.46% and the CENTURY model 21.42 and 31.84%, respectively, for Stizolobium aterrimum and Calopogonium mucunoides residue decomposition. After calibration, the models showed a high level of accuracy in estimating decomposition and N-mineralization, with error rates of less than 20%. The calibrated NDICEA model showed the highest level of accuracy, followed by APSIM and CENTURY. All models performed poorly in the first few months of decomposition and N-mineralization, indicating the need for an additional parameter for initial microorganism growth on the residues that would take the effect of leaching due to rainfall into account.

  5. Residualization Rates of Near Infrared Dyes for the Rational Design of Molecular Imaging Agents

    Science.gov (United States)

    Cilliers, Cornelius; Liao, Jianshan; Atangcho, Lydia; Thurber, Greg M.

    2016-01-01

    Purpose: Near infrared (NIR) fluorescence imaging is widely used for tracking antibodies and biomolecules in vivo. Clinical and preclinical applications include intraoperative imaging, tracking therapeutics, and fluorescent labeling as a surrogate for subsequent radiolabeling. Despite their extensive use, one of the fundamental properties of NIR dyes, the residualization rate within cells following internalization, has not been systematically studied. This rate is required for the rational design of probes and proper interpretation of in vivo results. Procedures: In this brief report, we measure the cellular residualization rate of eight commonly used dyes encompassing three core structures (cyanine, BODIPY, and oxazine/thiazine/carbopyronin). Results: We identify residualizing (half-life > 24 hrs) and non-residualizing (half-life < 24 hrs) dyes in both the far red (~650-680 nm) and near infrared (~740-800 nm) regions. Conclusions: These data will allow researchers to independently and rationally select the wavelength and residualizing nature of dyes for molecular imaging agent design. PMID:25869081

  6. Residualization Rates of Near-Infrared Dyes for the Rational Design of Molecular Imaging Agents.

    Science.gov (United States)

    Cilliers, Cornelius; Liao, Jianshan; Atangcho, Lydia; Thurber, Greg M

    2015-12-01

    Near-infrared (NIR) fluorescence imaging is widely used for tracking antibodies and biomolecules in vivo. Clinical and preclinical applications include intraoperative imaging, tracking therapeutics, and fluorescent labeling as a surrogate for subsequent radiolabeling. Despite their extensive use, one of the fundamental properties of NIR dyes, the residualization rate within cells following internalization, has not been systematically studied. This rate is required for the rational design of probes and proper interpretation of in vivo results. In this brief report, we measure the cellular residualization rate of eight commonly used dyes encompassing three core structures (cyanine, boron-dipyrromethene (BODIPY), and oxazine/thiazine/carbopyronin). We identify residualizing (half-life >24 h) and non-residualizing (half-life <24 h) dyes in both the far-red (~650-680 nm) and near-infrared (~740-800 nm) regions. This data will allow researchers to independently and rationally select the wavelength and residualizing nature of dyes for molecular imaging agent design.
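
    The residualizing/non-residualizing classification above reduces to a half-life estimate from retained-signal measurements. A minimal sketch fitting a single-exponential decay to invented data (not the paper's measurements):

      # Hedged sketch: estimating a dye's cellular residualization half-life.
      import numpy as np
      from scipy.optimize import curve_fit

      t_hr = np.array([0., 6., 12., 24., 48., 72.])
      signal = np.array([1.00, 0.87, 0.76, 0.58, 0.33, 0.19])   # retained fraction

      decay = lambda t, k: np.exp(-k * t)
      (k,), _ = curve_fit(decay, t_hr, signal, p0=[0.02])
      half_life = np.log(2) / k
      print(f"half-life = {half_life:.1f} h ->",
            "residualizing" if half_life > 24 else "non-residualizing")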

  7. Estimates of Single Sensor Error Statistics for the MODIS Matchup Database Using Machine Learning

    Science.gov (United States)

    Kumar, C.; Podesta, G. P.; Minnett, P. J.; Kilpatrick, K. A.

    2017-12-01

    Sea surface temperature (SST) is a fundamental quantity for understanding weather and climate dynamics. Although sensors aboard satellites provide global and repeated SST coverage, a characterization of SST precision and bias is necessary for determining the suitability of SST retrievals in various applications. Guidance on how to derive meaningful error estimates is still being developed. Previous methods estimated retrieval uncertainty based on geophysical factors, e.g., season or "wet" and "dry" atmospheres, but the discrete nature of these bins led to spatial discontinuities in SST maps. Recently, a new approach clustered retrievals based on the terms (excluding offset) in the statistical algorithm used to estimate SST. This approach resulted in over 600 clusters - too many to understand the geophysical conditions that influence retrieval error. Using MODIS and buoy SST matchups (2002-2016), we use machine learning algorithms (recursive and conditional trees, random forests) to gain insight into geophysical conditions leading to the different signs and magnitudes of MODIS SST residuals (satellite SSTs minus buoy SSTs). MODIS retrievals were first split into three categories of residual magnitude, with boundaries at ±0.4 C. These categories are heavily unbalanced, with residuals > 0.4 C being much less frequent. Performance of classification algorithms is affected by imbalance, thus we tested various rebalancing algorithms (oversampling, undersampling, combinations of the two). We consider multiple features for the decision tree algorithms: regressors from the MODIS SST algorithm, proxies for temperature deficit, and spatial homogeneity of brightness temperatures (BTs), e.g., the range of 11 μm BTs inside a 25 km2 area centered on the buoy location. These features and a rebalancing of classes led to an 81.9% accuracy when classifying SST retrievals into the three categories. Cloud contamination still is one of the causes leading to negative SST residuals. Precision and accuracy of error estimates from our decision tree
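
    The classification-under-imbalance workflow described above can be sketched as follows. This uses scikit-learn class weighting as a simple stand-in for the over/undersampling schemes the authors tested; all features and labels are invented stand-ins for the matchup variables.

      # Hedged sketch: three-class SST-residual classification with a decision
      # tree, compensating for class imbalance via class weights.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(3)
      n = 5000
      X = np.column_stack([
          rng.normal(0, 1, n),     # e.g. a retrieval-algorithm regressor
          rng.normal(0, 1, n),     # e.g. a temperature-deficit proxy
          rng.exponential(1, n),   # e.g. 11-um BT range in a 25 km^2 box
      ])
      # Imbalanced labels loosely tied to feature 2: 0 = small residual,
      # 1 and 2 = the two rarer residual classes.
      cut = np.quantile(X[:, 2], [0.85, 0.95])
      y = np.digitize(X[:, 2] + rng.normal(0, 0.3, n), cut)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
      clf = DecisionTreeClassifier(max_depth=6, class_weight="balanced",
                                   random_state=0).fit(X_tr, y_tr)
      print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")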

  8. Quadratic residues and non-residues selected topics

    CERN Document Server

    Wright, Steve

    2016-01-01

    This book offers an account of the classical theory of quadratic residues and non-residues with the goal of using that theory as a lens through which to view the development of some of the fundamental methods employed in modern elementary, algebraic, and analytic number theory. The first three chapters present some basic facts and the history of quadratic residues and non-residues and discuss various proofs of the Law of Quadratic Reciprocity in depth, with an emphasis on the six proofs that Gauss published. The remaining seven chapters explore some interesting applications of the Law of Quadratic Reciprocity, prove some results concerning the distribution and arithmetic structure of quadratic residues and non-residues, provide a detailed proof of Dirichlet's Class-Number Formula, and discuss the question of whether quadratic residues are randomly distributed. The text is a valuable resource for graduate and advanced undergraduate students as well as for mathematicians interested in number theory.

  9. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), which simulates a generic quantum computer on the gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i

  11. Study of systematic errors in the determination of total Hg levels in the ng/g to pg/g range in inorganic and organic matrices with two reliable spectrometrical determination procedures

    International Nuclear Information System (INIS)

    Kaiser, G.; Goetz, D.; Toelg, G.; Max-Planck-Institut fuer Metallforschung, Stuttgart; Knapp, G.; Maichin, B.; Spitzy, H.

    1978-01-01

    In the determination of Hg at ng/g and pg/g levels, systematic errors are due to faults in the analytical methods, such as the intake, preparation and decomposition of a sample. The sources of these errors have been studied both with 203Hg radiotracer techniques and with two multi-stage procedures developed for the determination of trace levels. The emission spectrometric (OES-MIP) procedure includes incineration of the sample in a microwave-induced oxygen plasma (MIP), isolation and enrichment on a gold absorbent, and excitation in an argon plasma (MIP). The emitted Hg radiation (253.7 nm) is evaluated photometrically with a semiconductor element. The detection limit of the OES-MIP procedure was found to be 0.01 ng, and the coefficient of variation 5% for 1 ng Hg. The second procedure combines a semi-automated wet digestion method (HClO3/HNO3) with reduction-aeration (ascorbic acid/SnCl2) and the flameless atomic absorption technique (253.7 nm). The detection limit of this procedure was found to be 0.5 ng, and the coefficient of variation 5% for 5 ng Hg. (orig.) [de

  12. Incorporation of systematic uncertainties in statistical decision rules

    International Nuclear Information System (INIS)

    Wichers, V.A.

    1994-02-01

    The influence of systematic uncertainties on statistical hypothesis testing is an underexposed subject. Systematic uncertainties cannot be incorporated in hypothesis tests, but they deteriorate the performance of these tests. A wrong treatment of systematic uncertainties in verification applications in safeguards leads to a false assessment of the strength of the safeguards measure, and thus undermines the safeguards system. The effects of systematic uncertainties on decision errors in hypothesis testing are analyzed quantitatively for an example from safeguards practice (LEU-HEU verification of UF6 enrichment in centrifuge enrichment plants). It is found that the only proper way to tackle systematic uncertainties is reduction to sufficiently low levels; criteria for these are proposed. Although the conclusions were obtained from the study of a single practical application, it is believed that they hold generally: for all sources of systematic uncertainties, all statistical decision rules, and all applications. (orig./HP)
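
    The deterioration mentioned above can be made concrete for a one-sided z-test: an uncorrected systematic bias of δ standard deviations inflates the nominal false-alarm rate α to 1 − Φ(z_α − δ). A minimal sketch (generic illustration, not the report's safeguards example):

      # Hedged sketch: how an uncorrected systematic bias inflates the
      # false-alarm rate of a one-sided z-test with nominal alpha = 0.05.
      from scipy.stats import norm

      alpha = 0.05
      z_alpha = norm.ppf(1 - alpha)
      for delta in [0.0, 0.25, 0.5, 1.0]:
          actual = 1 - norm.cdf(z_alpha - delta)
          print(f"bias = {delta:4.2f} sigma -> actual false-alarm rate = {actual:.3f}")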

  13. A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Melboe, Hallgeir

    2001-10-01

    This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal-oriented error estimators have attracted a lot of interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal-oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure, which, due to a finite number of iterations, introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal-oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
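
    The secondary point above - that a small residual in the stopping criterion need not mean a small error - is easy to demonstrate on an ill-conditioned system. A minimal sketch using plain steepest descent as a generic iterative solver (not the thesis' own method):

      # Hedged sketch: residual norm vs actual error norm for an SPD system
      # with condition number 1e6, solved by a few steepest-descent steps.
      import numpy as np

      rng = np.random.default_rng(4)
      Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
      A = Q @ np.diag(np.logspace(0, -6, 50)) @ Q.T     # SPD, cond(A) ~ 1e6
      x_true = rng.standard_normal(50)
      b = A @ x_true

      x = np.zeros(50)
      for _ in range(200):                              # exact line search steps
          r = b - A @ x
          x += (r @ r) / (r @ A @ r) * r

      r = b - A @ x
      print(f"relative residual: {np.linalg.norm(r)/np.linalg.norm(b):.1e}, "
            f"relative error: {np.linalg.norm(x - x_true)/np.linalg.norm(x_true):.1e}")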

  14. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  15. Distance error correction for time-of-flight cameras

    Science.gov (United States)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited due to properties of the scene and systematic errors. These errors can accumulate to multiple centimeters which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of the range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip allows to acquire a large amount of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.

  16. TH-B-BRC-01: How to Identify and Resolve Potential Clinical Errors

    Energy Technology Data Exchange (ETDEWEB)

    Das, I. [NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

    Radiation treatment consists of a chain of events influenced by the quality of machine operation, beam data commissioning, machine calibration, patient-specific data, simulation, treatment planning, imaging and treatment delivery. There is always a chance that the clinical medical physicist may make, or fail to detect, an error in one of these events that may impact the patient's treatment. In the clinical scenario, errors may be systematic and, without peer review, may have a low detectability because they are not part of routine QA procedures. During treatment, there might be errors on the machine that need attention. External reviews of some of the treatment delivery components by independent reviewers, like IROC, can detect errors, but may not be timely. The goal of this session is to help junior clinical physicists identify potential errors as well as the quality assurance approach of performing a root cause analysis to find and eliminate an error and to continually monitor for errors. A compilation of potential errors will be presented, with examples of the thought process required to spot the error and determine the root cause. Examples may include unusual machine operation, erratic electrometer readings, consistently lower electron output, variation in photon output, body parts inadvertently left in the beam, unusual treatment plans, poor normalization, hot spots, etc. Awareness of the possibility and detection of error in any link of the treatment process chain will help improve the safe and accurate delivery of radiation to patients. Four experts will discuss how to identify errors in four areas of clinical treatment. D. Followill, NIH grant CA 180803.

  17. TH-B-BRC-01: How to Identify and Resolve Potential Clinical Errors

    International Nuclear Information System (INIS)

    Das, I.

    2016-01-01

    Radiation treatment consists of a chain of events influenced by the quality of machine operation, beam data commissioning, machine calibration, patient-specific data, simulation, treatment planning, imaging and treatment delivery. There is always a chance that the clinical medical physicist may make, or fail to detect, an error in one of these events that may impact the patient's treatment. In the clinical scenario, errors may be systematic and, without peer review, may have a low detectability because they are not part of routine QA procedures. During treatment, there might be errors on the machine that need attention. External reviews of some of the treatment delivery components by independent reviewers, like IROC, can detect errors, but may not be timely. The goal of this session is to help junior clinical physicists identify potential errors as well as the quality assurance approach of performing a root cause analysis to find and eliminate an error and to continually monitor for errors. A compilation of potential errors will be presented, with examples of the thought process required to spot the error and determine the root cause. Examples may include unusual machine operation, erratic electrometer readings, consistently lower electron output, variation in photon output, body parts inadvertently left in the beam, unusual treatment plans, poor normalization, hot spots, etc. Awareness of the possibility and detection of error in any link of the treatment process chain will help improve the safe and accurate delivery of radiation to patients. Four experts will discuss how to identify errors in four areas of clinical treatment. D. Followill, NIH grant CA 180803.

  18. Measurement Error and Bias in Value-Added Models. Research Report. ETS RR-17-25

    Science.gov (United States)

    Kane, Michael T.

    2017-01-01

    By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value-added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of…
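
    The aggregation of residual gain scores described above, and the attenuation bias introduced by measurement error in the prior score, can be sketched as follows on simulated data (all numbers invented):

      # Hedged sketch: a value-added estimate from aggregated residual gains.
      # Regressing on the *observed* (noisy) prior score attenuates the slope,
      # which is the bias mechanism the report analyzes.
      import numpy as np

      rng = np.random.default_rng(5)
      true_prior = rng.normal(0, 1, 1000)
      observed_prior = true_prior + rng.normal(0, 0.5, 1000)   # noisy pretest
      current = 0.8 * true_prior + rng.normal(0, 0.6, 1000)    # growth model

      slope, intercept = np.polyfit(observed_prior, current, 1)
      residual_gain = current - (intercept + slope * observed_prior)

      teacher = rng.integers(0, 20, 1000)                      # 20 classrooms
      vam = [residual_gain[teacher == t].mean() for t in range(20)]
      print(f"attenuated slope: {slope:.2f} (error-free slope would be ~0.8)")
      print(f"spread of estimated teacher effects: {np.std(vam):.3f}")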

  19. Evaluation of image-guidance protocols in the treatment of head and neck cancers

    International Nuclear Information System (INIS)

    Zeidan, Omar A.; Langen, Katja M.; Meeks, Sanford L.; Manon, Rafael R.; Wagner, Thomas H.; Willoughby, Twyla R.; Jenkins, D. Wayne; Kupelian, Patrick A.

    2007-01-01

    Purpose: The aim of this study was to assess the residual setup error of different image-guidance (IG) protocols in the alignment of patients with head and neck cancer. The protocols differ in the percentage of treatment fractions that are associated with image guidance. Using data from patients who were treated with daily IG, the residual setup errors for several different protocols are retrospectively calculated. Methods and Materials: Alignment data from 24 patients (802 fractions) treated with daily IG on a helical tomotherapy unit were analyzed. The difference between the daily setup correction and the setup correction that would have been made according to a specific protocol was used to calculate the residual setup errors for each protocol. Results: The different protocols are generally effective in reducing systematic setup errors. Random setup errors are generally not reduced for fractions that are not image guided. As a consequence, if every other treatment is image guided, about 11% of all treatments (IG and non-IG) are still subject to three-dimensional setup errors of at least 5 mm. This frequency increases to about 29% if setup errors >3 mm are scored. For various protocols that require 15% to 31% of the treatments to be image guided, from 50% to 60% and from 26% to 31% of all fractions are subject to setup errors >3 mm and >5 mm, respectively. Conclusion: Residual setup errors decrease with increasing frequency of IG during the course of external-beam radiotherapy for head-and-neck cancer patients. The inability to reduce random setup errors for fractions that are not image guided results in notable residual setup errors.
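
    The residual-error bookkeeping used in this study is easy to reproduce in outline. The Python sketch below simulates it for a hypothetical every-other-day IG protocol on synthetic shift data: on non-imaged days the most recent imaged correction is reused, and the per-fraction residual is the difference between the actual daily shift and the applied correction. All numbers, and the simple reuse-last-correction rule, are illustrative assumptions rather than the paper's protocols or data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy daily 3D setup displacements (mm) for one patient: a fixed
    # systematic offset plus day-to-day random variation (invented values).
    n_fractions = 30
    systematic = rng.normal(0.0, 2.0, size=3)
    daily = systematic + rng.normal(0.0, 2.0, size=(n_fractions, 3))

    # Every-other-day IG: imaged fractions are fully corrected; non-imaged
    # fractions reuse the most recent imaged correction.
    applied = np.zeros_like(daily)
    last_correction = np.zeros(3)
    for i in range(n_fractions):
        if i % 2 == 0:                 # imaged fraction
            last_correction = daily[i]
        applied[i] = last_correction

    residual = daily - applied         # residual setup error per fraction
    r3d = np.linalg.norm(residual, axis=1)

    print(f"fractions with 3D residual > 3 mm: {np.mean(r3d > 3):.0%}")
    print(f"fractions with 3D residual > 5 mm: {np.mean(r3d > 5):.0%}")
    ```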

  20. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    In recent years the problem of the impact of mechanical vibrations on adaptive optics (AO) systems has attracted renewed attention. These vibrations appear as damped sinusoidal signals and have a deleterious effect on the system. One software solution for rejecting the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and updating of the reference signal to reject/minimize the vibration. In the first step, the choice of estimation method is very important. A very accurate and fast (below 10 ms) method for estimating these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. Several parameters affect the accuracy of the obtained results, e.g. CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.
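
    The record's estimator relies on spectrum interpolation with MSD time windows; as a generic stand-in for that idea, the sketch below refines an FFT peak of a Hann-windowed damped sinusoid by parabolic interpolation on the log-magnitude spectrum. The window, interpolation rule and signal parameters are assumptions for illustration, not the authors' exact method.

    ```python
    import numpy as np

    fs = 1000.0                        # sampling rate (Hz), assumed
    t = np.arange(2048) / fs
    f0, gamma = 123.4, 0.5             # true frequency and damping, assumed
    x = np.exp(-gamma * t) * np.sin(2 * np.pi * f0 * t)

    w = np.hanning(len(x))
    X = np.abs(np.fft.rfft(x * w))
    k = int(np.argmax(X))

    # Parabolic interpolation on the log-magnitude spectrum refines the
    # peak location to a fraction of an FFT bin.
    a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    f_est = (k + delta) * fs / len(x)

    print(f"estimated frequency: {f_est:.3f} Hz (true {f0} Hz)")
    ```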

  1. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
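
    As a minimal sketch of the Gaussian-process ingredient, the snippet below fits a GP to noisy synthetic error-rate estimates over time and extrapolates one step ahead. The scikit-learn kernel choice and all numbers are assumptions for illustration; the protocol itself derives its rate estimates from the running error-correction data.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)

    # Synthetic drifting physical error rate, observed through noisy
    # estimates (a toy stand-in for rates extracted from syndrome data).
    t = np.linspace(0, 10, 40)[:, None]
    true_rate = 1e-3 * (1 + 0.5 * np.sin(0.8 * t.ravel()))
    observed = true_rate + rng.normal(0, 5e-5, t.shape[0])

    kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.1)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

    # Predict the error rate one step into the future, with uncertainty.
    mean, std = gp.predict(np.array([[11.0]]), return_std=True)
    print(f"predicted error rate at t=11: {mean[0]:.2e} +/- {std[0]:.1e}")
    ```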

  2. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y [University of Kansas Hospital, Kansas City, KS (United States)]; Fullerton, G; Goins, B [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States)]

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro-CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After the tumors were excised, in-air micro-CT imaging was performed to determine the reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using the formula V = (π/6)*a*b*c, where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Linear regression analysis was then performed to compare the image-based tumor volumes with the reference tumor volume and the known test object volumes for the rats and the phantom, respectively. Results: The slopes of the regression lines for in-vivo tumor volumes measured by the three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US, respectively. For the phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US, respectively. Conclusion: For both the animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent, and systematic errors were mainly due to the selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to the ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement errors.
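
    The volume estimate and the regression diagnostic reported above take only a few lines. The sketch below applies the stated formula V = (π/6)·a·b·c to made-up diameter triplets and fits the resulting volumes against a made-up reference series; a slope near 1 would indicate little systematic bias. All numbers are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Maximum diameters (mm) in three perpendicular directions, as would
    # be read off MR / micro-CT / US images (illustrative values only).
    a = np.array([4.1, 7.2, 10.3, 13.8])
    b = np.array([3.9, 6.8, 9.7, 13.1])
    c = np.array([4.0, 7.0, 10.1, 13.5])

    volume = (np.pi / 6.0) * a * b * c      # ellipsoidal volume (mm^3)

    # Reference volumes, standing in for in-air micro-CT of excised tumors.
    reference = np.array([35.0, 180.0, 520.0, 1280.0])

    res = stats.linregress(reference, volume)
    # A slope near 1 means the image-based volumes track the reference.
    print(f"slope = {res.slope:.3f}, r^2 = {res.rvalue**2:.3f}")
    ```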

  3. Hadronization systematics and top mass reconstruction

    Directory of Open Access Journals (Sweden)

    Corcella Gennaro

    2014-01-01

    I discuss a few issues related to the systematic error on the top mass measurement at hadron colliders due to hadronization effects. Particular attention is paid to the impact of bottom-quark fragmentation in top decays, especially on reconstructions relying on final states with leptons and J/Ψ in the dilepton channel. I also discuss the relation between the measured mass and its theoretical definition, and report on work in progress, based on the Monte Carlo simulation of fictitious top-flavoured hadrons, which may shed light on this issue and on the hadronization systematics.

  4. Effects of waveform model systematics on the interpretation of GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. 
L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. 
A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. 
J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.

  5. Error and objectivity: cognitive illusions and qualitative research.

    Science.gov (United States)

    Paley, John

    2005-07-01

    Psychological research has shown that cognitive illusions, of which visual illusions are just a special case, are systematic and pervasive, raising epistemological questions about how error in all forms of research can be identified and eliminated. The quantitative sciences make use of statistical techniques for this purpose, but it is not clear what the qualitative equivalent is, particularly in view of widespread scepticism about validity and objectivity. I argue that, in the light of cognitive psychology, the 'error question' cannot be dismissed as a positivist obsession, and that the concepts of truth and objectivity are unavoidable. However, they constitute only a 'minimal realism', which does not necessarily bring a commitment to 'absolute' truth, certainty, correspondence, causation, reductionism, or universal laws in its wake. The assumption that it does reflects a misreading of positivism and, ironically, precipitates a 'crisis of legitimation and representation', as described by constructivist authors.

  6. Density-functional errors in ionization potential with increasing system size

    Energy Technology Data Exchange (ETDEWEB)

    Whittleton, Sarah R.; Sosa Vazquez, Xochitl A.; Isborn, Christine M., E-mail: cisborn@ucmerced.edu [Chemistry and Chemical Biology, School of Natural Sciences, University of California, Merced, 5200 North Lake Road, Merced, California 95343 (United States)]; Johnson, Erin R., E-mail: erin.johnson@dal.ca [Chemistry and Chemical Biology, School of Natural Sciences, University of California, Merced, 5200 North Lake Road, Merced, California 95343 (United States); Department of Chemistry, Dalhousie University, 6274 Coburg Road, Halifax, Nova Scotia B3H 4R2 (Canada)]

    2015-05-14

    This work investigates the effects of molecular size on the accuracy of density-functional ionization potentials for a set of 28 hydrocarbons, including series of alkanes, alkenes, and oligoacenes. As the system size increases, delocalization error introduces a systematic underestimation of the ionization potential, which is rationalized by considering the fractional-charge behavior of the electronic energies. The computation of the ionization potential with many density-functional approximations is not size-extensive due to excessive delocalization of the incipient positive charge. While inclusion of exact exchange reduces the observed errors, system-specific tuning of long-range corrected functionals does not generally improve accuracy. These results emphasize that good performance of a functional for small molecules is not necessarily transferable to larger systems.
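
    Size-dependent IP errors of this kind are typically diagnosed with ΔSCF calculations, IP = E(cation) − E(neutral). The sketch below shows that recipe for ethylene using the PySCF package; the molecule, functional and basis set are illustrative assumptions, not the calculations performed in this work.

    ```python
    from pyscf import gto, dft

    # Planar ethylene geometry (Angstrom), a small stand-in for the
    # hydrocarbon series discussed above.
    atoms = """
    C  0.0000  0.0000  0.6695
    C  0.0000  0.0000 -0.6695
    H  0.0000  0.9289  1.2321
    H  0.0000 -0.9289  1.2321
    H  0.0000  0.9289 -1.2321
    H  0.0000 -0.9289 -1.2321
    """

    # Neutral molecule, restricted Kohn-Sham.
    neutral = gto.M(atom=atoms, basis="def2-svp", charge=0, spin=0)
    mf0 = dft.RKS(neutral)
    mf0.xc = "b3lyp"
    e_neutral = mf0.kernel()

    # Cation (one unpaired electron), unrestricted Kohn-Sham.
    cation = gto.M(atom=atoms, basis="def2-svp", charge=1, spin=1)
    mf1 = dft.UKS(cation)
    mf1.xc = "b3lyp"
    e_cation = mf1.kernel()

    # Delta-SCF ionization potential in eV (1 Hartree = 27.2114 eV).
    print(f"IP = {(e_cation - e_neutral) * 27.2114:.2f} eV")
    ```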

  7. Statistical evaluation of major human errors during the development of new technological systems

    International Nuclear Information System (INIS)

    Campbell, G; Ott, K.O.

    1979-01-01

    Statistical procedures are presented to evaluate major human errors during the development of a new system: errors that have led, or can lead, to accidents or major failures. The first procedure aims at estimating the average residual occurrence rate for accidents or major failures after several have occurred. The procedure is based solely on the historical record. Certain idealizations are introduced that allow the application of a sound statistical evaluation procedure. These idealizations are realized in practice to a sufficient degree that the proposed estimation procedure yields meaningful results, even for situations with a sparse data base represented by very few accidents. Under the assumption that the possible human-error-related failure times have exponential distributions, the statistical technique of isotonic regression is proposed to estimate the failure rates due to human design error at the failure times of the system. The last value in the sequence of estimates gives the residual accident chance. In addition, the actual situation is tested against the hypothesis that the failure rate of the system remains constant over time. This test determines the chance that a decreasing failure rate is incidental, rather than an indication of an actual learning process. Both techniques can be applied not merely to a single system but to the entire series of similar systems that a technology would generate, enabling the assessment of technological improvement. For the purpose of illustration, the nuclear decay of isotopes was chosen as an example, since the assumptions of the model are rigorously satisfied in this case. This application shows satisfactory agreement of the estimated and actual failure rates (which are exactly known in this example), although the estimation was deliberately based on a sparse historical record.
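
    To make the isotonic-regression step concrete, here is a minimal sketch using scikit-learn: crude rate estimates (reciprocal interarrival times) at hypothetical failure times are smoothed under a non-increasing constraint, and the last fitted value plays the role of the residual rate. The failure times and the crude rate estimator are assumptions for illustration, not the paper's formalism.

    ```python
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    # Hypothetical cumulative operating experience (years) at which
    # design-error-related failures occurred.
    failure_times = np.array([0.7, 1.6, 3.1, 5.9, 10.2, 17.8])

    # Crude rate estimate at each failure: reciprocal of the preceding
    # interarrival time.
    interarrival = np.diff(np.concatenate(([0.0], failure_times)))
    raw_rates = 1.0 / interarrival

    # Antitonic (non-increasing) regression encodes the learning-process
    # assumption that the failure rate does not grow over time.
    iso = IsotonicRegression(increasing=False)
    fitted = iso.fit_transform(failure_times, raw_rates)

    print("fitted rates:", np.round(fitted, 3))
    print("residual rate estimate:", round(float(fitted[-1]), 3), "per year")
    ```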

  8. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  9. Statistical evaluation of design-error related accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1980-01-01

    In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended so as to apply to the variety of systems that evolves during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of nuclear power reactor technology, considering serious accidents that involve a particular design inadequacy in the accident progression.

  10. Pocket book "Expectations of operating personnel action" and card "Criteria for the previous meeting and error precursors"; Libro de bolsillo expectativas de actuacion del personal de operacion y tarjeta criterios reunion previa y precursores de error

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigo Gonzalez, M.

    2012-07-01

    We have developed a pocket manual of performance expectations for operating personnel. Additionally, a pocket card has been created that systematizes the use of pre-job briefings depending on the existence of error precursors and following the commission of an error. The manual serves to communicate the expected performance to the operations staff. The results show a positive change in working practices within a short period of time, both in training (simulator) and in the control room.

  11. FIB-based measurement of local residual stresses on microsystems

    Science.gov (United States)

    Vogel, Dietmar; Sabate, Neus; Gollhardt, Astrid; Keller, Juergen; Auersperg, Juergen; Michel, Bernd

    2006-03-01

    The paper presents research results on stress determination in micro- and nanotechnology components, addressing the need to control stresses introduced into sensors, MEMS and electronic devices during different micromachining processes. The method is based on deformation measurements made available inside focused ion beam (FIB) equipment. When material is removed locally by ion beam milling, existing residual stresses produce deformation fields around the milled feature. Digital image correlation techniques are used to extract deformation values from micrographs captured before and after milling. Two main milling features have been analyzed: through-hole and through-slit milling. Analytical solutions for the stress-release fields of in-plane stresses have been derived and compared to the respective experimental findings. Their good agreement allows a method to be established for determining residual stress values, which is demonstrated for thin membranes manufactured by silicon micromachining. Some emphasis is placed on eliminating the main error sources in stress determination, such as rigid-body displacements and rotations caused by drifts of the experimental conditions during FIB imaging. To illustrate potential application areas, residual stress suppression by ion implantation is evaluated with the method and reported here.
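
    At the core of the measurement is digital image correlation between micrographs captured before and after milling. The toy below shows its most basic ingredient, recovering an integer-pixel rigid shift by FFT cross-correlation on synthetic images; practical DIC adds subset matching, subpixel interpolation and removal of the rigid-body drifts discussed above. This is an illustrative sketch, not the authors' code.

    ```python
    import numpy as np

    def shift_by_cross_correlation(ref, mov):
        """Integer-pixel rigid shift between two images via circular
        FFT cross-correlation."""
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap indices above N/2 to negative shifts.
        return tuple(p if p <= s // 2 else p - s
                     for p, s in zip(peak, corr.shape))

    # Demo: shift a random "micrograph" by a known amount and recover it.
    rng = np.random.default_rng(2)
    before = rng.random((128, 128))
    after = np.roll(before, shift=(3, -5), axis=(0, 1))
    print(shift_by_cross_correlation(after, before))   # -> (3, -5)
    ```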

  12. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    …about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can … positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support…

  13. Markov chain beam randomization: a study of the impact of PLANCK beam measurement errors on cosmological parameter estimation

    Science.gov (United States)

    Rocha, G.; Pagano, L.; Górski, K. M.; Huffenberger, K. M.; Lawrence, C. R.; Lange, A. E.

    2010-04-01

    We introduce a new method to propagate uncertainties in the beam shapes used to measure the cosmic microwave background to cosmological parameters determined from those measurements. The method, called Markov chain beam randomization (MCBR), randomly samples from a set of templates or functions that describe the beam uncertainties. The method is much faster than direct numerical integration over systematic “nuisance” parameters, and is not restricted to simple, idealized cases as is analytic marginalization. It does not assume the data are normally distributed, and does not require Gaussian priors on the specific systematic uncertainties. We show that MCBR properly accounts for and provides the marginalized errors of the parameters. The method can be generalized and used to propagate any systematic uncertainties for which a set of templates is available. We apply the method to the Planck satellite, and consider future experiments. Beam measurement errors should have a small effect on cosmological parameters as long as the beam fitting is performed after removal of 1/f noise.
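
    The idea is straightforward to mock up: inside an ordinary Metropolis sampler, each likelihood evaluation draws a random template from the beam-uncertainty set, so the chain marginalizes over the beam as it runs. The toy below does this for a one-parameter model; the data model, template set and proposal scale are invented for illustration and bear no relation to the actual Planck analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy data: one observable smeared by an uncertain "beam width".
    true_amp, true_beam = 2.0, 1.0
    data = true_amp * np.exp(-0.5 * true_beam**2) + rng.normal(0, 0.01)

    # A set of templates describing the beam-width uncertainty.
    beam_templates = rng.normal(true_beam, 0.05, size=100)

    def loglike(amp):
        # MCBR step: a random template is used for this evaluation.
        beam = rng.choice(beam_templates)
        model = amp * np.exp(-0.5 * beam**2)
        return -0.5 * ((data - model) / 0.01) ** 2

    # Plain Metropolis sampling of the amplitude parameter.
    chain, amp, ll = [], 1.5, loglike(1.5)
    for _ in range(20000):
        prop = amp + rng.normal(0, 0.05)
        ll_prop = loglike(prop)
        if np.log(rng.random()) < ll_prop - ll:
            amp, ll = prop, ll_prop
        chain.append(amp)

    chain = np.array(chain[5000:])
    # The spread now includes the beam uncertainty, i.e. it is marginalized.
    print(f"amp = {chain.mean():.3f} +/- {chain.std():.3f}")
    ```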

  14. SNP discovery in nonmodel organisms: strand bias and base-substitution errors reduce conversion rates.

    Science.gov (United States)

    Gonçalves da Silva, Anders; Barendse, William; Kijas, James W; Barris, Wes C; McWilliam, Sean; Bunch, Rowan J; McCullough, Russell; Harrison, Blair; Hoelzel, A Rus; England, Phillip R

    2015-07-01

    Single nucleotide polymorphisms (SNPs) have become the marker of choice for genetic studies in organisms of conservation, commercial or biological interest. Most SNP discovery projects in nonmodel organisms apply a strategy for identifying putative SNPs based on filtering rules that account for random sequencing errors. Here, we analyse data used to develop 4723 novel SNPs for the commercially important deep-sea fish, orange roughy (Hoplostethus atlanticus), to assess the impact of not accounting for systematic sequencing errors when filtering identified polymorphisms during SNP discovery. We used SAMtools to identify polymorphisms in a velvet assembly of genomic DNA sequence data from seven individuals. The resulting set of polymorphisms was filtered to minimize 'bycatch', polymorphisms caused by sequencing or assembly error. An Illumina Infinium SNP chip was used to genotype a final set of 7714 polymorphisms across 1734 individuals. Five predictors were examined for their effect on the probability of obtaining an assayable SNP: depth of coverage, number of reads that support a variant, polymorphism type (e.g. A/C), strand bias and Illumina SNP probe design score. Our results indicate that filtering out systematic sequencing errors could substantially improve the efficiency of SNP discovery. We show that BLASTX can be used as an efficient tool to identify single-copy genomic regions in the absence of a reference genome. The results have implications for research aiming to identify assayable SNPs and build SNP genotyping assays for nonmodel organisms. © 2014 John Wiley & Sons Ltd.
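
    Relating such predictors to the probability that a putative SNP converts to an assayable one is naturally phrased as a logistic regression. The sketch below fits one on synthetic stand-ins for the five predictors named above; the column definitions, effect sizes and scikit-learn pipeline are assumptions for illustration, not the study's analysis.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    n = 2000

    # Synthetic stand-ins for the five predictors examined in the study.
    depth = rng.poisson(20, n)                # depth of coverage
    alt_reads = rng.binomial(depth, 0.4)      # reads supporting the variant
    strand_bias = rng.beta(2, 2, n)           # fraction of reads on one strand
    design_score = rng.uniform(0.3, 1.0, n)   # probe design score
    transversion = rng.integers(0, 2, n)      # polymorphism type (toy coding)

    X = np.column_stack([depth, alt_reads, strand_bias,
                         design_score, transversion])

    # Toy outcome: probability that a putative SNP becomes assayable.
    logit = -2 + 0.05 * depth + 3 * design_score \
            - 8 * np.abs(strand_bias - 0.5)
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
    print("standardized coefficients:",
          np.round(model.named_steps["logisticregression"].coef_[0], 2))
    ```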

  15. Evaluation of Stability of Complexes of Inner Transition Metal Ions with 2-Oxo-1-pyrrolidine Acetamide and Role of Systematic Errors

    Directory of Open Access Journals (Sweden)

    Sangita Sharma

    2011-01-01

    BEST FIT models were used to study the complexation of inner transition metal ions Y(III), La(III), Ce(III), Pr(III), Nd(III), Sm(III), Gd(III), Dy(III) and Th(IV) with 2-oxo-1-pyrrolidine acetamide at 30 °C in 10%, 20%, 30%, 40%, 50% and 60% v/v dioxane-water mixtures at 0.2 M ionic strength. The Irving-Rossotti titration method was used to obtain the titration data. Calculations were carried out with the PKAS and BEST Fortran IV computer programs. The expected species L, LH+, ML, ML2 and ML(OH)3 were obtained with SPEPLOT. The stability of the complexes increased with increasing dioxane content. The observed change in stability can be explained on the basis of electrostatic effects, non-electrostatic effects, the solvating power of the solvent mixture, interactions between ions, and interactions of ions with solvents. The effects of systematic errors, such as dissolved carbon dioxide and the concentrations of alkali, acid, ligand and metal, are also explained here.

  16. Residual diesel measurement in sand columns after surfactant/alcohol washing

    International Nuclear Information System (INIS)

    Martel, R.; Gelinas, P.J.

    1996-01-01

    A new, simple gravimetric technique has been designed to determine the residual oil saturation of complex hydrocarbon mixtures (e.g., diesel) in sand column experiments, because reliable methods are lacking. The He/N2 technique is based on drying the sand columns by circulating helium gas to drag oil droplets into a cold trap (liquid nitrogen). With this technique, residual diesel measurement can be performed easily, immediately after alcohol/surfactant washing and in the same lab. For high residual diesel content in Ottawa sand (25 to 30 g/kg), the technique is much more accurate (± 2% or 600 mg/kg) than the standard analytical methods for the determination of mineral oil and grease. The average relative error on partial diesel dissolution in sand columns estimated after alcohol/surfactant flooding (residual saturation of 10 to 15 g/kg) is as low as 5%. The precision of the He/N2 technique is adequate to compare the relative efficiency of washing solutions when partial extraction of residual oil in Ottawa sand columns is performed. However, this technique is not adapted to the determination of traces of oil in sediment or to environmental control of contaminated soils. Each diesel determination by the He/N2 technique costs less than $8 in chemical products (helium and liquid nitrogen). A simple laboratory drying setup can be built for less than $400, which makes this technique valuable for diesel analyses when a large number of tests are required.

  17. Resolution and systematic limitations in beam based alignment

    Energy Technology Data Exchange (ETDEWEB)

    Tenenbaum, P.G.

    2000-03-15

    Beam-based alignment of quadrupoles by variation of quadrupole strength is a widely used technique in accelerators today. The authors describe the dominant systematic limitation of this technique, which arises from the change in the center position of the quadrupole as the strength is varied, and derive expressions for the resulting error. In addition, the authors derive an expression for the statistical resolution of such techniques in a periodic transport line, given knowledge of the line's transport matrices, the resolution of the beam position monitor system, and the details of the strength variation procedure. These results are applied to the Next Linear Collider main linear accelerator, an 11 kilometer accelerator containing 750 quadrupoles and 5,000 accelerator structures. The authors find that, in principle, a statistical resolution of 1 micron is easily achievable, but the systematic error due to variation of the magnetic centers could be several times larger.
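
    In its simplest thin-lens form the measurement works because changing the integrated quadrupole strength by d(kL) kicks the beam by dθ = −d(kL)·x_off, which a downstream BPM sees as dx = R12·dθ; fitting the slope of dx against d(kL) yields the offset. The toy below performs that fit; the transfer constant, offset and noise level are invented numbers, not NLC parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    R12 = 10.0          # m, assumed quad-to-BPM transfer matrix element
    x_off = 150e-6      # m, true beam offset in the quadrupole (assumed)
    bpm_noise = 1e-6    # m, BPM resolution (assumed)

    # Vary the integrated strength and record the BPM reading change.
    dkl = np.linspace(-0.05, 0.05, 11)                    # 1/m
    dx = -R12 * dkl * x_off + rng.normal(0, bpm_noise, dkl.size)

    # Least-squares slope -> offset estimate.
    slope = np.polyfit(dkl, dx, 1)[0]
    x_est = -slope / R12
    print(f"estimated offset: {x_est*1e6:.1f} um (true {x_off*1e6:.0f} um)")
    ```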

  18. The error analysis of field size variation in pelvis region by using immobilization device

    International Nuclear Information System (INIS)

    Kim, Ki Hwan; Kang, No Hyun; Kim, Dong Wuk; Kim, Jun Sang; Jang, Ji Young; Kim, Jae Sung; Kim, Yong Eun; Cho, Moon June

    2000-01-01

    In radiotherapy, surrounding normal tissue may be irradiated because of field-size inconsistencies caused by changes in patient position during treatment. In this study we analyzed the errors reduced by using an immobilization device, measured with an electronic portal imaging device (EPID). Twenty-one patients were treated in the pelvic region with 10 MV X-rays from Aug. 1998 to Aug. 1999 at Chungnam National University Hospital. All patients were treated in the supine position. They were separated into two groups: 11 patients without a device and 10 patients with an immobilization device. We used styrofoam for the immobilization device and measured the errors in the anterior direction (x, y axes) and the lateral direction (z, y axes) from simulation film to EPID image using a matching technique. For the group without the immobilization device, the mean deviations of the x and y axes were 0.19 mm and 0.48 mm, respectively; the standard deviations of the systematic deviation were 2.38 mm and 2.19 mm, and those of the random deviation were 1.92 mm and 1.29 mm. The mean deviations of the z and y axes were -3.61 mm and 2.07 mm, respectively; the standard deviations of the systematic deviation were 3.20 mm and 2.29 mm, and those of the random deviation were 2.73 mm and 1.62 mm. For the immobilization device group, the mean deviations of the x and y axes were 0.71 mm and -1.07 mm, respectively; the standard deviations of the systematic deviation were 1.80 mm and 2.26 mm, and those of the random deviation were 1.56 mm and 1.27 mm. The mean deviations of the z and y axes were -1.76 mm and 1.08 mm, respectively; the standard deviations of the systematic deviation were 1.87 mm and 2.83 mm, and those of the random deviation were 1.68 mm and 1.65 mm. The immobilization device thus reduced random and systematic setup errors.
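
    The systematic and random components quoted above follow the usual population decomposition: the systematic SD is the spread of the per-patient mean deviations, and the random SD is the root mean square of the per-patient standard deviations. A short sketch of that bookkeeping on synthetic one-axis data (all numbers invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy daily setup deviations (mm) for a patient group along one axis.
    n_patients, n_fractions = 10, 20
    patient_means = rng.normal(0.5, 1.8, n_patients)   # per-patient systematic
    deviations = patient_means[:, None] + rng.normal(
        0, 1.5, (n_patients, n_fractions))

    group_mean = deviations.mean()                         # overall mean
    sigma_systematic = deviations.mean(axis=1).std(ddof=1) # SD of patient means
    sigma_random = np.sqrt(
        (deviations.std(axis=1, ddof=1) ** 2).mean())      # RMS of patient SDs

    print(f"mean deviation: {group_mean:.2f} mm")
    print(f"systematic SD:  {sigma_systematic:.2f} mm")
    print(f"random SD:      {sigma_random:.2f} mm")
    ```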

  19. Accuracy of crystal structure error estimates

    International Nuclear Information System (INIS)

    Taylor, R.; Kennard, O.

    1986-01-01

    A statistical analysis of 100 crystal structures retrieved from the Cambridge Structural Database is reported. Each structure has been determined independently by two different research groups. Comparison of the independent results leads to the following conclusions: (a) The e.s.d.'s of non-hydrogen-atom positional parameters are almost invariably too small. Typically, they are underestimated by a factor of 1.4-1.45. (b) The extent to which e.s.d.'s are underestimated varies significantly from structure to structure and from atom to atom within a structure. (c) Errors in the positional parameters of atoms belonging to the same chemical residue tend to be positively correlated. (d) The e.s.d.'s of heavy-atom positions are less reliable than those of light-atom positions. (e) Experimental errors in atomic positional parameters are normally, or approximately normally, distributed. (f) The e.s.d.'s of cell parameters are grossly underestimated, by an average factor of about 5 for cell lengths and 2.5 for cell angles. There is marginal evidence that the accuracy of atomic-coordinate e.s.d.'s also depends on diffractometer geometry, refinement procedure, whether or not the structure has a centre of symmetry, and the degree of precision attained in the structure determination. (orig.)

  20. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced …
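
    For reference, the depolarizing error model named above is simple to sample: with probability p a uniformly random Pauli operator hits the qubit, otherwise nothing happens. The self-contained toy below checks the survival probability of |0⟩; it is a single-qubit illustration only, nothing like the fault-tolerance simulations described in the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Single-qubit Pauli matrices.
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def depolarize(state, p):
        """With probability p, apply a uniformly random Pauli error."""
        if rng.random() < p:
            return [X, Y, Z][rng.integers(3)] @ state
        return state

    # Estimate how often |0> survives the channel unchanged.
    # Z leaves |0> invariant, so the survival probability is 1 - 2p/3.
    p, trials, survived = 1e-2, 100000, 0
    for _ in range(trials):
        out = depolarize(np.array([1, 0], dtype=complex), p)
        survived += abs(out[0]) ** 2 > 0.5
    print(f"survival: {survived/trials:.4f} (expect ~{1 - 2*p/3:.4f})")
    ```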

  1. Measuring the residual stress of transparent conductive oxide films on PET by the double-beam shadow Moiré interferometer

    Science.gov (United States)

    Chen, Hsi-Chao; Huang, Kuo-Ting; Lo, Yen-Ming; Chiu, Hsuan-Yi; Chen, Guan-Jhen

    2011-09-01

    The purpose of this research was to construct a measurement system that can quickly and accurately analyze the residual stress of flexible electronics. Transparent conductive oxide (TCO) films of tin-doped indium oxide (ITO) were deposited on PET substrates by radio frequency (RF) magnetron sputtering using corresponding oxide targets. Shadow Moiré interferometry is a usable way to measure large deformations, so we set up a double-beam shadow Moiré interferometer to measure and analyze the residual stress of TCO films on PET. A mathematical model was developed and combined with image processing software. Using the LabVIEW graphical software, we measured the distance between the left and right fringes on the pattern to solve for the curvature of the deformed surface. The residual stress could then be calculated by the Stoney correction formula for flexible electronics. By combining the phase-shifting method with shadow Moiré, the measurement resolution and accuracy were greatly improved. We also performed an error analysis for the system, whose relative error is about 2%. Shadow Moiré interferometry is therefore a non-destructive, fast and simple system for measuring the residual stress of TCO films on PET.
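
    Once a curvature radius has been extracted from the fringes, the film stress follows from Stoney's relation, σ = E_s·t_s²/(6(1−ν_s)·t_f·R); the record applies a corrected variant of this formula for flexible substrates. A minimal sketch with assumed ITO-on-PET numbers, not values from the paper:

    ```python
    def stoney_stress(E_s, nu_s, t_s, t_f, R):
        """Residual film stress from the classical Stoney formula:
        sigma = E_s * t_s**2 / (6 * (1 - nu_s) * t_f * R)."""
        return E_s * t_s**2 / (6.0 * (1.0 - nu_s) * t_f * R)

    # Illustrative numbers for an ITO film on PET (assumed values):
    E_s = 4.0e9      # Pa, substrate Young's modulus (PET)
    nu_s = 0.38      # substrate Poisson ratio
    t_s = 125e-6     # m, substrate thickness
    t_f = 200e-9     # m, film thickness
    R = 0.8          # m, curvature radius from the Moire fringes

    sigma = stoney_stress(E_s, nu_s, t_s, t_f, R)
    print(f"residual stress: {sigma/1e6:.0f} MPa")
    ```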

  2. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    Science.gov (United States)

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Oftentimes, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is to frequently and systematically collect measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  3. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    International Nuclear Information System (INIS)

    Wang, B; Pan, B; Tao, R; Lubineau, G

    2017-01-01

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and introduces noticeable errors into DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach. (paper)

  4. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    KAUST Repository

    Wang, B

    2017-02-15

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and introduces noticeable errors into DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach.
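
    The correction step described in both records is a lookup-and-subtract against the pre-established dilatational strain-time curve. A minimal sketch, with an invented baseline standing in for the calibrated curve:

    ```python
    import numpy as np

    # Pre-established self-heating strain-time curve (microstrain vs
    # minutes of scanner operation); values invented for illustration.
    baseline_t = np.array([0, 30, 60, 120, 240])          # min
    baseline_strain = np.array([0, 150, 260, 380, 430])   # microstrain

    def correct_strain(measured_strain, scan_time):
        """Subtract the interpolated self-heating strain at scan_time."""
        artifact = np.interp(scan_time, baseline_t, baseline_strain)
        return measured_strain - artifact

    # A nominally strain-free rescan after 90 min that reads 320 microstrain
    # is corrected back to ~0.
    print(f"corrected strain: {correct_strain(320.0, 90.0):.0f} microstrain")
    ```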

  5. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu [University of Michigan, Department of Astronomy, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States)

    2016-11-20

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  6. Prediction of Active Site and Distal Residues in E. coli DNA Polymerase III alpha Polymerase Activity.

    Science.gov (United States)

    Parasuram, Ramya; Coulther, Timothy A; Hollander, Judith M; Keston-Smith, Elise; Ondrechen, Mary Jo; Beuning, Penny J

    2018-02-20

    The process of DNA replication is carried out with high efficiency and accuracy by DNA polymerases. The replicative polymerase in E. coli is DNA Pol III, which is a complex of 10 different subunits that coordinates simultaneous replication on the leading and lagging strands. The 1160-residue Pol III alpha subunit is responsible for the polymerase activity and copies DNA accurately, making one error per 10^5 nucleotide incorporations. The goal of this research is to determine the residues that contribute to the activity of the polymerase subunit. Homology modeling and the computational methods of THEMATICS and POOL were used to predict functionally important amino acid residues through their computed chemical properties. Site-directed mutagenesis and biochemical assays were used to validate these predictions. Primer extension, steady-state single-nucleotide incorporation kinetics, and thermal denaturation assays were performed to understand the contribution of these residues to the function of the polymerase. This work shows that the top 15 residues predicted by POOL, a set that includes the three previously known catalytic aspartate residues, seven remote residues, plus five previously unexplored first-layer residues, are important for function. Six previously unidentified residues, R362, D405, K553, Y686, E688, and H760, are each essential to Pol III activity; three additional residues, Y340, R390, and K758, play important roles in activity.

  7. Systematic analysis of dependent human errors from the maintenance history at Finnish NPPs - A status report

    Energy Technology Data Exchange (ETDEWEB)

    Laakso, K. [VTT Industrial Systems (Finland)

    2002-12-01

    Operating experience has shown missed-detection events, in which faults have passed inspections and functional tests during outage maintenance and persisted into the subsequent operating periods. The causes of these failures have often been complex event sequences involving human and organisational factors. Common cause and other dependent failures of safety systems, in particular, may contribute significantly to the reactor core damage risk. The topic has been addressed in Finnish studies of human common cause failures, in which experiences of latent human errors have been searched for and analysed in detail in the maintenance history. A review of the bulk of the analysis results from the Olkiluoto and Loviisa plant sites shows that instrumentation and control and electrical equipment are more prone to failure events caused by human error than other maintenance areas, and that plant modifications as well as predetermined preventive maintenance are significant sources of common cause failures. Most errors stem from the refuelling and maintenance outage period at both sites, and less than half of the dependent errors were identified during the same outage. Dependent human errors originating from modifications could be reduced by more tailored specification and coverage of their start-up testing programmes. Improvements could also be achieved by more case-specific planning of the installation inspection and functional testing of complicated maintenance work, or of work objects of higher importance to plant safety and availability. Better use and analysis of condition monitoring information for maintenance steering could also help. Feedback from discussions of the analysis results with plant experts and professionals remains crucial in developing the final conclusions and recommendations that meet the specific development needs at the plants. (au)

  8. Evaluation of Image-Guidance Strategies in the Treatment of Localized Prostate Cancer

    International Nuclear Information System (INIS)

    Kupelian, Patrick A.; Lee, Choonik; Langen, Katja M.; Zeidan, Omar A.; Manon, Rafael R.; Willoughby, Twyla R.; Meeks, Sanford L.

    2008-01-01

    Purpose: To compare different image-guidance strategies in the alignment of prostate cancer patients. Using data from patients treated using daily image guidance, the remaining setup errors for several different strategies were retrospectively calculated. Methods and Materials: The alignment data from 74 patients treated with helical tomotherapy were analyzed, resulting in a data set of 2,252 fractions during which a megavoltage computed tomography image was used for image guidance with intraprostatic metallic fiducials. Given the daily positional adjustments, a variety of protocols, differing in imaging frequency and method, were retrospectively studied. The residual setup errors were determined for each protocol. Results: As expected, the systematic errors were effectively reduced with imaging. However, the random errors were unaffected. Even when image guidance was performed every other day with a running mean of the previous displacements, residual setup errors >5 mm occurred in 24% of all fractions. This frequency increased to about 40% if setup errors >3 mm were scored. Conclusion: Setup errors increased with decreasing frequency of image guidance. However, residual errors were still significant at the 5-mm level, even when imaging was performed every other day. This suggests that localizations must be performed daily in the setup of prostate cancer patients during a course of external beam radiotherapy.
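
    The protocol comparison can be illustrated with a toy one-dimensional simulation: each patient receives a fixed systematic setup error plus daily random errors, imaging days are corrected in full, and non-imaging days apply the running mean of the previously measured displacements. The error magnitudes below are assumed for illustration, not taken from the study.

      import numpy as np

      rng = np.random.default_rng(0)
      n_patients, n_fractions = 74, 30
      sigma_sys, sigma_rand = 3.0, 2.5      # assumed 1D error SDs, mm

      exceed = []
      for _ in range(n_patients):
          systematic = rng.normal(0.0, sigma_sys)
          history, correction = [], 0.0
          for f in range(n_fractions):
              error = systematic + rng.normal(0.0, sigma_rand)
              if f % 2 == 0:                     # imaging day: full correction
                  history.append(error)
                  correction = np.mean(history)  # running mean for later days
                  residual = 0.0
              else:                              # no imaging: running-mean shift
                  residual = error - correction
              exceed.append(abs(residual) > 5.0)

      print(f"fractions with |residual| > 5 mm: {100 * np.mean(exceed):.1f}%")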

  9. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
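
    In schematic form, the procedure for a single grid cell or zonal band reduces to a few lines: keep the estimates within ±50% of the base product and take their standard deviation as the bias error. The values below are hypothetical.

      import numpy as np

      gpcp = 2.8                                      # base estimate, mm/day
      products = np.array([2.5, 3.0, 3.3, 5.1, 2.7])  # other estimates, mm/day

      # Include only products within +/-50% of the base estimate.
      included = products[(products >= 0.5 * gpcp) & (products <= 1.5 * gpcp)]

      s = np.std(included, ddof=1)      # estimated systematic (bias) error
      m = included.mean()               # mean precipitation
      print(f"s = {s:.2f} mm/day, relative bias error s/m = {s / m:.0%}")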

  10. Adaptive framework to better characterize errors of a priori fluxes and observational residuals in a Bayesian setup for urban flux inversions.

    Science.gov (United States)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Karion, A.; Mueller, K.; Gourdji, S.; Martin, C.; Whetstone, J. R.

    2017-12-01

    The National Institute of Standards and Technology (NIST) supports the North-East Corridor Baltimore Washington (NEC-B/W) project and the Indianapolis Flux Experiment (INFLUX), which aim to quantify sources of greenhouse gas (GHG) emissions as well as their uncertainties. These projects employ different flux estimation methods, including top-down inversion approaches. The traditional Bayesian inversion method estimates emission distributions by updating prior information using atmospheric GHG observations coupled to an atmospheric transport and dispersion model. The magnitude of the update depends on the observed enhancement along with the assumed errors, such as those associated with the prior information and the atmospheric transport and dispersion model. These errors are specified within the inversion covariance matrices, and their assumed structure and magnitude can have a large impact on the emission estimates from the inversion. The main objective of this work is to build a data-adaptive model for these covariance matrices. We construct a synthetic data experiment using a Kalman filter inversion framework (Lopez et al., 2017) employing different configurations of the transport and dispersion model and an assumed prior. Unlike previous traditional Bayesian approaches, we estimate posterior emissions using regularized sample covariance matrices associated with prior errors to investigate whether the structure of the matrices helps to better recover our hypothetical true emissions. To incorporate transport model error, we use an ensemble of transport models combined with a space-time analytical covariance to construct a covariance that accounts for errors in space and time. A Kalman filter is then run using these covariances along with maximum likelihood estimates (MLE) of the involved parameters. Preliminary results indicate that specifying spatio-temporally varying errors in the error covariances can improve the flux estimates and uncertainties. We
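
    A single filter step of the generic form used in such inversions can be sketched as follows; the dimensions, transport operator, and covariance values are illustrative stand-ins, not the NEC-B/W or INFLUX configuration.

      import numpy as np

      rng = np.random.default_rng(1)
      n_flux, n_obs = 6, 4

      x_prior = np.ones(n_flux)                   # prior flux estimates
      P = 0.25 * np.eye(n_flux)                   # prior-error covariance (the work
                                                  # explores regularized sample forms)
      H = rng.uniform(0.0, 1.0, (n_obs, n_flux))  # transport/footprint operator
      R = 0.1 * np.eye(n_obs)                     # model-data mismatch covariance
      y = H @ (x_prior + 0.3)                     # synthetic observations

      # One Kalman filter update: gain, posterior mean, posterior covariance.
      K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
      x_post = x_prior + K @ (y - H @ x_prior)
      P_post = (np.eye(n_flux) - K @ H) @ P
      print(x_post)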

  11. A Systems Modeling Approach for Risk Management of Command File Errors

    Science.gov (United States)

    Meshkat, Leila

    2012-01-01

    Commanding errors are often (but not always) due to procedures: lack of maturity in the processes, incomplete requirements, or lack of compliance with the procedures. Other causes of commanding errors include lack of understanding of system states, inadequate communication, and hasty changes to standard procedures in response to an unexpected event. In general, it is important to look at the big picture before taking corrective actions. For errors traced back to procedures, considering the reliability of the process as a metric during its design may help to reduce risk. This metric is obtained by using human reliability data from the nuclear industry. A structured method for the collection of anomaly data will help the operator think systematically about the anomaly and facilitate risk management. Formal models can be used for risk-based design and risk management. A generic set of models can be customized for a broad range of missions.
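
    A process-reliability metric of the kind described can be approximated as the product of per-step success probabilities, with human error probabilities (HEPs) taken from nuclear-industry human reliability data; the values below are placeholders.

      # Hypothetical per-step human error probabilities (HEPs) for a command
      # procedure; real values would come from nuclear-industry HRA tables.
      heps = [0.003, 0.01, 0.001, 0.005]

      reliability = 1.0
      for hep in heps:
          reliability *= (1.0 - hep)   # probability that every step succeeds

      print(f"procedure reliability = {reliability:.4f}")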

  12. A multi-sensor burned area algorithm for crop residue burning in northwestern India: validation and sources of error

    Science.gov (United States)

    Liu, T.; Marlier, M. E.; Karambelas, A. N.; Jain, M.; DeFries, R. S.

    2017-12-01

    A leading source of outdoor emissions in northwestern India is crop residue burning after the annual monsoon (kharif) and winter (rabi) crop harvests. Agricultural burned area, from which agricultural fire emissions are often derived, can be poorly quantified because of the mismatch between moderate-resolution satellite sensors and the relatively small size and short burn period of the fires. Many previous studies use the Global Fire Emissions Database (GFED), which is based on the Moderate Resolution Imaging Spectroradiometer (MODIS) burned area product MCD64A1, as an outdoor fire emissions dataset. Correction factors derived from MODIS active fire detections have previously been used to account for small fires. We present a new burned area classification algorithm that leverages more frequent MODIS observations (500 m x 500 m) together with higher-spatial-resolution Landsat observations (30 m x 30 m). Our approach is based on two-tailed Normalized Burn Ratio (NBR) thresholds, abbreviated ModL2T NBR, and results in an estimated 104 ± 55% higher burned area than GFEDv4.1s (version 4, MCD64A1 + small fires correction) in northwestern India during the 2003-2014 winter (October to November) burning seasons. Regional transport of winter fire emissions affects approximately 63 million people downwind. The general increase in burned area (+37% from 2003-2007 to 2008-2014) over the study period also correlates with increased mechanization (+58% in combine harvester usage from 2001-2002 to 2011-2012). Further, we find strong correlations between ModL2T NBR-derived burned area and the results of an independent survey (r = 0.68) and previous studies (r = 0.92). Sources of error arise from small median landholding sizes (1-3 ha), the heterogeneous spatial distribution of the two dominant burning practices (partial and whole field), coarse spatio-temporal satellite resolution, cloud and haze cover, and limited Landsat scene availability. The burned area estimates of this study can be used to build
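
    For a single pixel, the classification logic reduces to computing NBR before and after the burn window and applying thresholds to both the pre-burn value and the change. The sketch below is one plausible reading of the two-tailed test, with placeholder thresholds rather than the paper's calibrated ones.

      def nbr(nir, swir):
          """Normalized Burn Ratio from NIR and SWIR reflectance."""
          return (nir - swir) / (nir + swir)

      # Hypothetical pre- and post-burn reflectance for one pixel.
      nbr_pre = nbr(nir=0.45, swir=0.20)
      nbr_post = nbr(nir=0.25, swir=0.30)
      dnbr = nbr_pre - nbr_post

      # Require both live vegetation before the burn (high pre-burn NBR)
      # and a large NBR drop; thresholds here are placeholders.
      burned = (nbr_pre > 0.2) and (dnbr > 0.15)
      print(f"dNBR = {dnbr:.2f}, burned = {burned}")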

  13. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated the setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors using the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody™ and Robotics Tilt Module™ are products of BrainLAB A.G., Heimstetten, Germany)

  14. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  15. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    Science.gov (United States)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
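
    The screw-theory formulation composes each axis motion as the matrix exponential of a twist and inserts small error twists into the kinematic chain; the volumetric error is the difference between the perturbed and nominal tool positions. A minimal two-axis sketch (geometry and error magnitudes invented):

      import numpy as np
      from scipy.linalg import expm

      def twist_matrix(omega, v):
          """4x4 se(3) matrix from angular part omega and linear part v."""
          W = np.array([[0.0, -omega[2], omega[1]],
                        [omega[2], 0.0, -omega[0]],
                        [-omega[1], omega[0], 0.0]])
          T = np.zeros((4, 4))
          T[:3, :3] = W
          T[:3, 3] = v
          return T

      # Nominal motions: rotation about z by 0.5 rad, then 100 mm along x.
      rot = expm(twist_matrix([0, 0, 1], [0, 0, 0]) * 0.5)
      lin = expm(twist_matrix([0, 0, 0], [1, 0, 0]) * 100.0)

      # Small error twist on the linear axis (pitch plus straightness errors).
      err = expm(twist_matrix([0, 1e-4, 0], [0, 0.01, 0.005]))

      tool = np.array([0.0, 0.0, 200.0, 1.0])    # tool tip, homogeneous coords
      nominal = rot @ lin @ tool
      actual = rot @ lin @ err @ tool
      print("volumetric error (mm):", actual[:3] - nominal[:3])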

  16. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    International Nuclear Information System (INIS)

    Beck, S.M.

    1975-04-01

    A mobile, self-contained Faraday cup system for beam current measurements of nominal 600-MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons that are ejected from the entrance window and enter the cup. The total systematic error is calculated to be -0.83 percent, a decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600-MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.
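
    Applying the calculated correction is a one-line adjustment: because the -0.83 percent systematic error is a decrease from the true current, the cup reading is divided by (1 - 0.0083) before converting to a proton rate. A worked example with an assumed reading:

      E_CHARGE = 1.602176634e-19           # elementary charge, C

      i_measured = 2.00e-9                 # A, hypothetical cup reading
      i_true = i_measured / (1 - 0.0083)   # undo the -0.83% systematic decrease
      protons_per_sec = i_true / E_CHARGE

      print(f"true current = {i_true:.3e} A, "
            f"proton rate = {protons_per_sec:.3e} /s")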

  17. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining (PC) scheme, in which errors are corrected at the receiver from the erroneous copies. The PC scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes, the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
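
    The basic packet combining idea can be sketched as follows: the bit positions where two erroneous copies disagree are the candidate error locations, and the receiver searches over flips of those bits until an error-detection check passes. The checksum below is a toy stand-in for a real CRC.

      from itertools import product

      def checksum(bits):
          """Toy position-weighted checksum; a real receiver would use a CRC."""
          return sum((i + 1) * b for i, b in enumerate(bits)) % 251

      def packet_combine(copy1, copy2, tx_checksum):
          """Basic PC scheme: flip subsets of the disagreeing bit positions
          in copy1 until the checksum matches."""
          diff = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
          for flips in product([0, 1], repeat=len(diff)):
              trial = list(copy1)
              for pos, flip in zip(diff, flips):
                  trial[pos] ^= flip
              if checksum(trial) == tx_checksum:
                  return trial
          return None  # fails when both copies err in the same positions

      sent = [1, 0, 1, 1, 0, 0, 1, 0]
      copy1, copy2 = sent.copy(), sent.copy()
      copy1[2] ^= 1                  # one bit error per copy,
      copy2[5] ^= 1                  # at different positions
      print(packet_combine(copy1, copy2, checksum(sent)) == sent)  # True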

  18. Measurement of plutonium and americium in molten salt residues

    International Nuclear Information System (INIS)

    Haas, F.X.; Lawless, J.L.; Herren, W.E.; Hughes, M.E.

    1979-01-01

    The measurement of plutonium and americium in molten salt residues using a segmented gamma-ray scanning device is described. The system was calibrated using artificially fabricated as well as process-generated samples. All samples were calorimetered, and the americium-to-plutonium ratio of each sample was determined by gamma-ray spectroscopy. For the nine samples calorimetered thus far, no significant biases are present in the comparison of the segmented gamma-ray assay with the calorimetric assay. Estimated errors are of the order of 10 percent and depend on the americium-to-plutonium ratio determination.
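
    The role of the americium-to-plutonium ratio in the calorimetric assay can be seen in a rough sketch: the measured thermal power is divided by an effective specific power that itself depends on the ratio. The specific powers and sample values below are approximate, illustrative numbers, with the plutonium isotopic mix lumped into a single effective figure.

      # Approximate specific powers (mW/g); the Pu figure lumps the isotopic
      # mix into one effective number and is an assumption for illustration.
      P_PU_EFF = 2.5       # effective specific power of the Pu mix, mW/g
      P_AM241 = 114.0      # Am-241 specific power, mW/g

      power_mw = 950.0     # measured sample power, mW (hypothetical)
      am_to_pu = 0.02      # Am/Pu mass ratio from gamma-ray spectroscopy

      # Total power = m_Pu * (P_Pu_eff + ratio * P_Am241); solve for m_Pu.
      m_pu = power_mw / (P_PU_EFF + am_to_pu * P_AM241)
      m_am = am_to_pu * m_pu
      print(f"Pu: {m_pu:.0f} g, Am: {m_am:.1f} g")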

  19. Distribution patterns of firearm discharge residues as revealed by neutron activation analysis

    International Nuclear Information System (INIS)

    Pillay, K.K.S.; Driscoll, D.C.; Jester, W.A.

    1975-01-01

    A systematic investigation using a variety of handguns has revealed the existence of distinguishable distribution patterns of firearm discharge residues on surfaces below the flight path of a bullet. The residues are identifiable even at distances of 12 meters from the gun using nondestructive neutron activation analysis. The results of these investigations show that the distribution pattern for a given gun is reproducible with similar ammunition, and that the patterns developed between the firearm and the target contain two distinct regions: one associated with the position of the gun and the other in the vicinity of the target. The judicious application of these findings could be of significant value in criminal investigations. (T.G.)