WorldWideScience

Sample records for systematic errors limits

  1. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    ... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail, both empirically and through computational cognitive modeling...

  2. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited]

    International Nuclear Information System (INIS)

    Birge, Jonathan R.; Kaertner, Franz X.

    2008-01-01

    We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

  3. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with the systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
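
The two variation schemes described in the abstract can be sketched with a toy linear model (my own illustration, not the paper's: the sensitivities, event counts and run counts are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the observable depends linearly on two systematic parameters
# with these hypothetical sensitivities.
sens = np.array([0.8, -0.3])
nominal = 10.0

def mc_run(params, n_events=10_000):
    """One MC run: nominal prediction, linear systematic shift, MC statistics."""
    mean = nominal + sens @ params
    return rng.normal(mean, 1.0, n_events).mean()

baseline = mc_run(np.zeros(2))

# Unisim: one run per parameter, each varied by one standard deviation.
shifts = [mc_run(np.eye(2)[i]) - baseline for i in range(2)]
var_unisim = sum(s**2 for s in shifts)

# Multisim: every run varies all parameters at once, drawn from their
# assumed (here standard normal) distributions; the spread over runs
# estimates the total systematic variance.
var_multisim = np.var([mc_run(rng.normal(size=2)) for _ in range(200)], ddof=1)

print(var_unisim, var_multisim)  # both should approach 0.8**2 + 0.3**2 = 0.73
```

With ample MC statistics per run, both estimators converge on the same total systematic variance; the paper's point is how their precision differs when MC statistical noise competes with the individual systematic shifts.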

  4. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with the systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  5. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with the systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  6. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation in this case would not yield any significant benefits.
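
The partition into a systematic, direction-dependent share (pre-travel variation) and a random share (repeatability) can be illustrated with a toy lobing model; all magnitudes here are hypothetical, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical kinematic-probe model: pre-travel varies with approach
# direction (systematic "lobing"), on top of random triggering noise
# whose size is the unidirectional repeatability.
directions = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
pretravel_um = 5.0 + 2.0 * np.cos(2.0 * directions)   # systematic part, µm
repeatability_um = 0.2                                # random part, µm

# 25 repeated triggers per approach direction
trials = pretravel_um[:, None] + rng.normal(0.0, repeatability_um, (36, 25))

# Systematic share: spread of the per-direction means (pre-travel variation).
systematic_um = np.ptp(trials.mean(axis=1))
# Random share: pooled within-direction scatter.
random_um = trials.std(axis=1, ddof=1).mean()
print(systematic_um, random_um)  # systematic dominates for this probe model
```

For these assumed numbers the systematic share is roughly twenty times the random one, which is the regime in which the paper argues compensation pays off.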

  7. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...

  8. RHIC susceptibility to variations in systematic magnetic harmonic errors

    International Nuclear Information System (INIS)

    Dell, G.F.; Peggs, S.; Pilat, F.; Satogata, T.; Tepikian, S.; Trbojevic, D.; Wei, J.

    1994-01-01

    Results of a study to determine the sensitivity of tune to uncertainties of the systematic magnetic harmonic errors in the 8 cm dipoles of RHIC are reported. Tolerances specified to the manufacturer for tooling and fabrication can result in systematic harmonics different from the expected values. Limits on the range of systematic harmonics have been established from magnet calculations, and the impact on tune from such harmonics has been established

  9. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without them. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward.
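
The two claims quoted above — quadrature addition of common and statistical errors, and repetition reducing only the statistical part — can be checked with a small simulation (the sd values and sample counts are illustrative assumptions, not from the evaluation files):

```python
import numpy as np

rng = np.random.default_rng(2)

# One "experiment": a single common (systematic) offset shared by all n
# repeated measurements, plus independent statistical noise per repeat.
true_value = 100.0
stat_sd, syst_sd = 2.0, 1.0

def experiment(n):
    """Mean of n repeated measurements that share one common error."""
    common = rng.normal(0.0, syst_sd)
    return (true_value + common + rng.normal(0.0, stat_sd, n)).mean()

# Repetition shrinks the statistical part like 1/sqrt(n) but leaves the
# common part untouched; the two parts combine in quadrature.
observed = {}
for n in (1, 100):
    observed[n] = np.std([experiment(n) for _ in range(4000)], ddof=1)
    predicted = np.hypot(stat_sd / np.sqrt(n), syst_sd)
    print(f"n={n:3d}: simulated sd {observed[n]:.2f}, quadrature {predicted:.2f}")
```

Going from n=1 to n=100 repeats drops the total spread only from about sqrt(4+1) to about sqrt(0.04+1): the floor set by the common error survives any amount of repetition.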

  10. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Learning mechanisms to limit medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  12. Systematic Review of Errors in Inhaler Use

    DEFF Research Database (Denmark)

    Sanchis, Joaquin; Gich, Ignasi; Pedersen, Søren

    2016-01-01

    A systematic search for articles reporting direct observation of inhaler technique by trained personnel covered the period from 1975 to 2014. Outcomes were the nature and frequencies of the three most common errors; the percentage of patients demonstrating correct, acceptable, or poor technique; and variations in these outcomes over these 40 years and when partitioned into years 1 to 20 and years 21 to 40. Analyses were conducted in accordance with recommendations from Preferred Reporting Items for Systematic Reviews and Meta-Analyses and Strengthening the Reporting of Observational Studies in Epidemiology...

  13. Investigation of systematic errors of metastable "atomic pair" number

    CERN Document Server

    Yazkov, V

    2015-01-01

    Sources of systematic errors in the analysis of data collected in 2012 are analysed. Estimations of systematic errors in the number of “atomic pairs” from metastable π+π− atoms are presented.

  14. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

    Systematic error growth rate peak is observed at wavenumber 2 up to the 4-day forecast, then ... the influence of summer systematic error and random ... total exchange. When the error energy budgets are examined in the spectral domain, one may ask questions on the error growth at a certain wavenumber from its interaction with ...

  15. SHERPA: A systematic human error reduction and prediction approach

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1986-01-01

    This paper describes a Systematic Human Error Reduction and Prediction Approach (SHERPA) which is intended to provide guidelines for human error reduction and quantification in a wide range of human-machine systems. The approach utilizes as its basis current cognitive models of human performance. The first module in SHERPA performs task and human error analyses, which identify likely error modes, together with guidelines for the reduction of these errors by training, procedures and equipment redesign. The second module uses a SARAH approach to quantify the probability of occurrence of the errors identified earlier, and provides cost-benefit analyses to assist in choosing the appropriate error reduction approaches in the third module.

  16. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas-phase and adsorbed reaction species, which is particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not with the typically energy-corrected OCO backbone.

  17. Systematic errors in VLF direction-finding of whistler ducts

    International Nuclear Information System (INIS)

    Strangeways, H.J.; Rycroft, M.J.

    1980-01-01

    In the previous paper it was shown that the systematic error in the azimuthal bearing due to multipath propagation and incident wave polarisation (when this also constitutes an error) was given by only three different forms for all VLF direction-finders currently used to investigate the position of whistler ducts. In this paper the magnitude of this error is investigated for different ionospheric and ground parameters for these three different systematic error types. By incorporating an ionosphere for which the refractive index is given by the full Appleton-Hartree formula, the variation of the systematic error with ionospheric electron density and latitude and direction of propagation is investigated in addition to the variation with wave frequency, ground conductivity and dielectric constant and distance of propagation. The systematic bearing error is also investigated for the three methods when the azimuthal bearing is averaged over a 2 kHz bandwidth. This is found to lead to a significantly smaller bearing error which, for the crossed-loops goniometer, approximates the bearing error calculated when phase-dependent terms in the receiver response are ignored. (author)

  18. Systematic Errors in Dimensional X-ray Computed Tomography

    DEFF Research Database (Denmark)

    In dimensional X-ray computed tomography (CT), many physical quantities influence the final result. However, it is important to know which factors in CT measurements potentially lead to systematic errors, so that it is possible to compensate for them. In this talk, typical error sources in dimensional X-ray CT are discussed...

  19. Tackling systematic errors in quantum logic gates with composite rotations

    International Nuclear Information System (INIS)

    Cummins, Holly K.; Llewellyn, Gavin; Jones, Jonathan A.

    2003-01-01

    We describe the use of composite rotations to combat systematic errors in single-qubit quantum logic gates and discuss three families of composite rotations which can be used to correct off-resonance and pulse length errors. Although developed and described within the context of nuclear magnetic resonance quantum computing, these sequences should be applicable to any implementation of quantum computation
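
One family of such composite rotations is the Wimperis BB1 sequence for pulse-length errors. The sketch below assumes that family (the three-pulse correction with phase φ = arccos(−θ/4π) is standard for BB1, but the target angle, error size and pulse ordering here are my illustrative choices) and compares a naive pulse with its composite version under a 5% amplitude error:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot(theta, phi):
    """Single-qubit rotation by theta about an axis at angle phi in the xy-plane."""
    axis = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def fidelity(U, V):
    """Gate fidelity |Tr(U† V)| / 2 for single-qubit unitaries."""
    return abs(np.trace(U.conj().T @ V)) / 2

theta = np.pi / 2                 # target: 90° rotation about x
target = rot(theta, 0.0)
eps = 0.05                        # 5% pulse-length (amplitude) error

# Naive pulse: the error scales the rotation angle by (1 + eps).
naive = rot(theta * (1 + eps), 0.0)

# BB1 composite pulse: W(phi) = R(pi,phi) R(2pi,3phi) R(pi,phi) applied
# with the base pulse; phi = arccos(-theta / (4*pi)). The same error
# scales every pulse length alike, yet the sequence cancels it.
phi = np.arccos(-theta / (4 * np.pi))
def bb1(eps):
    s = 1 + eps
    w = rot(np.pi * s, phi) @ rot(2 * np.pi * s, 3 * phi) @ rot(np.pi * s, phi)
    return w @ rot(theta * s, 0.0)

print(1 - fidelity(target, naive), 1 - fidelity(target, bb1(eps)))
```

At zero error the correction block W collapses to the identity, so nothing is lost when the pulses are perfect; at 5% error the composite gate's infidelity is orders of magnitude below the naive pulse's.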

  20. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and only an attitude recording of 4 Hz, which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency of systematic deformation in a Pléiades tri-stereo combination with small base length: the small base length enlarges small systematic errors in object space. But systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. A tilt of the DHM can in theory be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to a not optimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS...

  21. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Science.gov (United States)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.

  22. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
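
The start/stop account of scalar error and scalar variability can be simulated in a few lines; the criterion placements (0.8 and 1.3 of the target) and the Weber fraction are hypothetical numbers chosen only to illustrate the effect, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar-timing sketch: start and stop criteria sit asymmetrically around
# the target, and the spread of both grows in proportion to the target.
cv = 0.15                        # hypothetical Weber fraction
ratios = []
for target in (5.0, 15.0, 45.0):
    starts = rng.normal(0.80 * target, cv * target, 5000)  # start early
    stops = rng.normal(1.30 * target, cv * target, 5000)   # stop late
    center = (starts + stops) / 2.0
    ratios.append(center.mean() / target)

# Asymmetric criteria put the mean center at 1.05 * target at every target
# time: a systematic error that itself scales with the interval.
print(ratios)
```

The center-to-target ratio is the same at 5 s, 15 s and 45 s, which is exactly what "scalar error" means: the bias is a fixed fraction of the interval, not a fixed number of seconds.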

  23. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Moricciani, D. [INFN-Sez. 'Roma Tor Vergata', Rome (Italy); Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.

  24. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

    This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and of the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations and experiments on a mobile robot.
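
The effect of an incorrectly known wheel base can be sketched with a toy dead-reckoning loop (illustrative geometry and numbers, not from the paper): the same encoder increments are integrated twice, once with the true wheel base and once with a miscalibrated one.

```python
import math

def integrate(d_left, d_right, wheel_base):
    """Dead-reckon a differential-drive pose from per-step wheel displacements."""
    x = y = heading = 0.0
    for dl, dr in zip(d_left, d_right):
        heading += (dr - dl) / wheel_base  # heading change from wheel difference
        step = (dl + dr) / 2.0             # distance travelled this step
        x += step * math.cos(heading)
        y += step * math.sin(heading)
    return x, y, heading

# Gentle arc: the right wheel moves slightly farther on every step.
d_left = [0.0100] * 1000                   # metres per encoder interval
d_right = [0.0102] * 1000

true_pose = integrate(d_left, d_right, wheel_base=0.50)
est_pose = integrate(d_left, d_right, wheel_base=0.48)  # 4% wheel-base error

heading_err = est_pose[2] - true_pose[2]   # grows with accumulated turning
print(heading_err)
```

Because the heading error accumulates with every turn and then steers the position integration, a small constant parameter error produces a pose error that grows with distance travelled; this is what makes calibrating the wheel base (rather than just filtering) worthwhile.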

  25. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    Arima, Tatsumi

    1993-01-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and clustering effects is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, based on a comparison of the Tobimatsu–Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.)

  26. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and clustering effects is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, based on a comparison of the Tobimatsu–Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.)

  27. Medication Errors in the Southeast Asian Countries: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Shahrzad Salmasi

    Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries, and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and remedied.

  8. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.

  9. Systematic Error of Acoustic Particle Image Velocimetry and Its Correction

    Directory of Open Access Journals (Sweden)

    Mickiewicz Witold

    2014-08-01

    Particle Image Velocimetry (PIV) is increasingly the method of choice not only for the visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of the acoustic particle velocity. PIV with a low sampling rate (about 15 Hz) can be applied to visualize the acoustic field using acquisition synchronized to the excitation signal. Such a phase-locked PIV technique is described and used in the experiments presented in the paper. The main goal of the research was to propose a model of the systematic PIV error due to the non-zero time interval between the acquisitions of the two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. The usefulness of the presented model is confirmed experimentally. The correction procedure based on the proposed model, applied to the measurement data, increases the accuracy of acoustic particle velocity field visualization and creates new possibilities in the observation of sound fields excited with multi-tonal or band-limited noise signals.
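
The averaging bias described in this abstract can be illustrated with a small numerical sketch (all numbers invented for illustration): PIV reports the mean particle displacement over the interframe time, so a sinusoidal acoustic velocity is attenuated by a known sinc factor, which is exactly the kind of systematic, correctable error the paper models.

```python
import numpy as np

# Idealized sketch (numbers invented): PIV reports the mean displacement over
# the interframe time dt, i.e. a moving average of the true velocity. For a
# sinusoidal acoustic velocity v(t) = V * cos(2*pi*f*t), averaging over dt
# attenuates the measured amplitude by sin(pi*f*dt) / (pi*f*dt).
f = 1000.0        # Hz, acoustic excitation frequency (assumed)
dt = 200e-6       # s, assumed interval between the two PIV images
attenuation = np.sinc(f * dt)   # numpy's sinc(x) = sin(pi*x)/(pi*x)
print(attenuation)              # < 1: a systematic, correctable underestimate
```

Because the attenuation factor is deterministic for a known frequency, dividing the measured amplitude by it is a simple correction of the type the paper proposes.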

  10. Reducing systematic errors in measurements made by a SQUID magnetometer

    International Nuclear Information System (INIS)

    Kiss, L.F.; Kaptás, D.; Balogh, J.

    2014-01-01

    A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into the position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. For the examined Co₁.₉Fe₁.₁Si Heusler alloy and for pure iron and nickel samples, the accuracy could be increased beyond the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors – radial displacement in particular – and not by instrumental or environmental noise. - Highlights: • A simple method is described which reduces systematic errors of a SQUID. • The errors arise from a radial displacement of the sample in the gradiometer coil. • The procedure is to rotate the sample rod (with the sample) around its axis. • The best fit to the SQUID voltage has to be attained moving the sample through the coil. • The accuracy of measuring magnetic moments can be increased significantly.

  11. Effects of averaging over motion and the resulting systematic errors in radiation therapy

    International Nuclear Information System (INIS)

    Evans, Philip M; Coolens, Catherine; Nioutsikou, Elena

    2006-01-01

    The potential for systematic errors in radiotherapy of a breathing patient is considered using the statistical model of Bortfeld et al (2002 Phys. Med. Biol. 47 2203-20). It is shown that although averaging over 30 fractions does result in a narrow Gaussian distribution of errors, as predicted by the central limit theorem, the fact that one or a few samples of the breathing patient's motion distribution are used for treatment planning (in contrast to the many treatment fractions that are likely to be delivered) may result in a much larger error with a systematic component. The error distribution may be particularly large if a scan at breath-hold is used for planning. (note)
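
The contrast drawn in this note, between the fraction-averaged error (which narrows per the central limit theorem) and the systematic error from planning on a single motion sample, can be sketched numerically (the amplitude and fraction count below are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical motion model (all numbers illustrative): each fraction samples
# a displacement from the patient's breathing-motion distribution, and the
# treatment effectively delivers the average over the fractions.
true_mean, spread = 0.0, 5.0          # mm
n_fractions, n_trials = 30, 10000

delivered = rng.normal(true_mean, spread, (n_trials, n_fractions)).mean(axis=1)
print(delivered.std())                # ~ spread / sqrt(30): the CLT narrowing

# But a plan based on a single snapshot of the motion carries the full
# distribution width as a one-off, systematic offset for that patient.
planning_offset = rng.normal(true_mean, spread, n_trials)
print(planning_offset.std())          # ~ spread: not reduced by fractionation
```

The second spread dominates the first by a factor of roughly √30, which is the note's point about planning from one or a few samples.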

  12. Systematic investigation of SLC final focus tolerances to errors

    International Nuclear Information System (INIS)

    Napoly, O.

    1996-10-01

    In this paper we review the tolerances of the SLC final focus system. To calculate these tolerances we used the error-analysis routine of the program FFADA, which was written to aid the design and analysis of final focus systems for the future linear colliders. This routine, completed by S. Fartoukh, systematically reviews the errors generated by the geometric 6-d Euclidean displacements of each magnet as well as by the field errors (normal and skew) up to sextupolar order. It calculates their effects on the orbit and the transfer matrix to second order in the errors, thus including cross-talk between errors originating from two different magnets. It also translates these effects into tolerances derived from spot-size growth and luminosity loss. We ran the routine for the following set of beam IP parameters: σ*_x = 2.1 μm; σ*_x′ = 300 μrad; σ*_x = 1 mm; σ*_y = 0.55 μm; σ*_y′ = 200 μrad; σ*_b = 2 × 10⁻³. The resulting errors and tolerances are displayed in a series of histograms which are reproduced in this paper. (author)

  13. Resolution and systematic limitations in beam based alignment

    Energy Technology Data Exchange (ETDEWEB)

    Tenenbaum, P.G.

    2000-03-15

    Beam based alignment of quadrupoles by variation of quadrupole strength is a widely-used technique in accelerators today. The authors describe the dominant systematic limitation of this technique, which arises from the change in the center position of the quadrupole as the strength is varied, and derive expressions for the resulting error. In addition, the authors derive an expression for the statistical resolution of such techniques in a periodic transport line, given knowledge of the line's transport matrices, the resolution of the beam position monitor system, and the details of the strength variation procedure. These results are applied to the Next Linear Collider main linear accelerator, an 11 kilometer accelerator containing 750 quadrupoles and 5,000 accelerator structures. The authors find that in principle a statistical resolution of 1 micron is easily achievable but the systematic error due to variation of the magnetic centers could be several times larger.
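
The strength-variation technique and its statistical resolution can be sketched in a toy thin-lens model (all numbers are invented; this is not the NLC analysis itself):

```python
import numpy as np

rng = np.random.default_rng(5)

# Thin-lens toy model of quadrupole-shunting alignment (numbers invented):
# changing the integrated quad strength by dk kicks the beam by
# theta = -dk * x_off, where x_off is the beam-to-center offset in the quad.
# A downstream BPM with transfer element R12 sees dy = R12 * theta, so
#     x_off = -dy / (R12 * dk),
# and BPM noise sets the statistical resolution sigma_bpm / (R12 * dk).
x_off = 0.030        # mm, true offset to recover
dk = 0.5             # 1/m, strength change
R12 = 10.0           # m, quad-to-BPM transfer element
sigma_bpm = 0.001    # mm, BPM resolution

dy = R12 * (-dk * x_off) + rng.normal(0.0, sigma_bpm, 1000)  # repeated shots
x_est = -dy / (R12 * dk)
print(x_est.mean(), x_est.std())   # mean ~ 0.030 mm, spread ~ sigma_bpm/(R12*dk)
```

The systematic limitation the paper highlights, a magnetic center that moves as the strength is varied, would appear here as a dk-dependent term added to x_off that no amount of BPM averaging removes.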

  14. Economic impact of medication error: a systematic review.

    Science.gov (United States)

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence is required for the implementation of quality-of-care interventions, and reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM and Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in 2015 euros. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. The mean cost per error per study ranged from €2.58 to €111,727.08. Healthcare costs were used to measure economic impact in 15 of the included studies, with one study measuring litigation costs. Four studies included costs incurred in primary care, with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population, with 11 studies reporting the economic impact of an individual type of medication error or of error within a specific patient population. Considerable variability existed between studies in terms of the financial costs, patients, settings and errors included, and many studies were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting, with little assessment of primary care impact, and limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Physical predictions from lattice QCD. Reducing systematic errors

    International Nuclear Information System (INIS)

    Pittori, C.

    1994-01-01

    Some recent developments in the theoretical understanding of lattice quantum chromodynamics and of its possible sources of systematic error are reported, and a review of some of the latest Monte Carlo results for light-quark phenomenology is presented. A very general introduction to quantum field theory on a discrete space-time lattice is given, and the Monte Carlo methods which allow one to compute many interesting physical quantities in the non-perturbative domain of strong interactions are illustrated. (author). 17 refs., 3 figs., 3 tabs

  16. User interface for MAWST limit of error program

    International Nuclear Information System (INIS)

    Crain, B. Jr.

    1991-01-01

    This paper reports on a user-friendly interface which is being developed to aid the preparation of input data for the Los Alamos National Laboratory software module MAWST (Materials Accounting With Sequential Testing), used at the Savannah River Site to propagate limits of error for facility material balances. The forms-based interface is being designed using traditional software project management tools and the Ingres family of database management and application development products (products of Relational Technology, Inc.). The software will run on VAX computers (products of Digital Equipment Corporation) on which the VMS operating system and Ingres database management software are installed. Use of the interface software will reduce the time required to prepare input data for calculations and will also reduce errors associated with data preparation.

  17. Random and systematic beam modulator errors in dynamic intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Parsai, Homayon; Cho, Paul S; Phillips, Mark H; Giansiracusa, Robert S; Axen, David

    2003-01-01

    This paper reports on the dosimetric effects of random and systematic modulator errors in the delivery of dynamic intensity modulated beams. A sliding-window type delivery that utilizes a combination of multileaf collimators (MLCs) and backup diaphragms was examined. Gaussian functions with standard deviations ranging from 0.5 to 1.5 mm were used to simulate random positioning errors. A clinical example involving a clival meningioma was chosen, with the optic chiasm and brain stem as limiting critical structures in the vicinity of the tumour. Dose calculations for different modulator fluctuations were performed, and a quantitative analysis was carried out based on cumulative and differential dose volume histograms for the gross target volume and surrounding critical structures. The study indicated that random modulator errors have a strong tendency to reduce minimum target dose and homogeneity. Furthermore, it was shown that random perturbation of both MLCs and backup diaphragms of the order of σ = 1 mm can lead to 5% errors in the prescribed dose. In comparison, when the MLCs or backup diaphragms alone were perturbed, the system was more robust, and modulator errors of at least σ = 1.5 mm were required to cause dose discrepancies greater than 5%. For systematic perturbations, even errors of the order of ±0.5 mm were shown to result in significant dosimetric deviations.
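
The random-perturbation setup (Gaussian leaf-position errors with σ between 0.5 and 1.5 mm) can be sketched with a deliberately simplified fluence model; the gap profile and proportionality below are assumptions for illustration, not the paper's dose calculation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sliding-window model (an illustrative simplification): the fluence at
# a point scales with the gap between the leading and trailing leaves.
# Independent Gaussian position errors on both leaf banks perturb the local
# gap and hence the delivered fluence.
x = np.linspace(0.0, 100.0, 201)              # mm along the leaf-travel direction
gap_nominal = 10.0 + 2.0 * np.sin(x / 15.0)   # mm, a modulated nominal gap

def perturbed_gap(sigma):
    """Gap with Gaussian errors of std sigma (mm) on each leaf bank."""
    err_a = rng.normal(0.0, sigma, x.size)    # trailing-leaf errors
    err_b = rng.normal(0.0, sigma, x.size)    # leading-leaf errors
    return gap_nominal + (err_b - err_a)

rel_err = {}
for sigma in (0.5, 1.0, 1.5):
    err = perturbed_gap(sigma) - gap_nominal
    rel_err[sigma] = np.abs(err).mean() / gap_nominal.mean()
    print(f"sigma = {sigma} mm -> mean relative fluence error ~ {rel_err[sigma]:.3f}")
```

Even this toy model reproduces the qualitative trend in the abstract: the relative fluence error grows with σ, and perturbing both banks is worse than perturbing one.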

  18. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    International Target Values (ITVs) give random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be periodically evaluated and checked against the ITVs for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed which focuses on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)
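
One hedged reading of such an error model is a one-way ANOVA decomposition; the model and all numbers below are assumptions for illustration (the paper's actual calculation model may differ), including the clipping that keeps the systematic variance non-negative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed error model (illustrative): the residual of a measurement in
# calibration period j is
#     e_ij = s_j + r_ij,
# where s_j ~ N(0, sigma_sys) is a systematic offset shared within a period
# and r_ij ~ N(0, sigma_ran) is random. A one-way ANOVA separates the two
# variances; clipping at zero mirrors the always-positive requirement.
sigma_sys, sigma_ran = 0.5, 0.2
periods, per_period = 50, 20

s = rng.normal(0.0, sigma_sys, periods)
resid = s[:, None] + rng.normal(0.0, sigma_ran, (periods, per_period))

within = resid.var(axis=1, ddof=1).mean()          # estimates sigma_ran**2
between = resid.mean(axis=1).var(ddof=1)           # sigma_sys**2 + sigma_ran**2/n
sys_var = max(between - within / per_period, 0.0)  # clipped: never negative

print(np.sqrt(within), np.sqrt(sys_var))           # ~ (0.2, 0.5)
```

With few periods the raw between-minus-within estimate can go negative, which is exactly why a construction guaranteeing positive variances is worth the paper's attention.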

  19. Systematic literature review of hospital medication administration errors in children

    Directory of Open Access Journals (Sweden)

    Ameer A

    2015-11-01

    Ahmed Ameer,1 Soraya Dhillon,1 Mark J Peters,2 Maisoon Ghaleb1 (1Department of Pharmacy, School of Life and Medical Sciences, University of Hertfordshire, Hatfield, UK; 2Paediatric Intensive Care Unit, Great Ormond Street Hospital, London, UK). Objective: Medication administration is the last step in the medication process. It can act as a safety net to prevent unintended harm to patients if errors are detected at this stage. However, medication administration errors (MAEs) during this process have been documented and are thought to be preventable. In pediatric medicine, doses are usually administered based on the child's weight or body surface area, which increases the risk of drug miscalculations and therefore MAEs. The aim of this review is to report MAEs occurring in pediatric inpatients. Methods: Twelve bibliographic databases were searched for studies published between January 2000 and February 2015 using "medication administration errors", "hospital", and "children"-related terminologies. Handsearching of relevant publications was also carried out. A second reviewer screened articles for eligibility and quality in accordance with the inclusion/exclusion criteria. Key findings: A total of 44 studies were systematically reviewed. MAEs were generally defined as a deviation of the dose given from that prescribed; this included omitted doses and administration at the wrong time. Hospital MAEs in children accounted for a mean of 50% of all reported medication error reports (n=12,588). They were also identified in a mean of 29% of doses observed (n=8,894). The most prevalent types of MAE related to preparation, infusion rate, dose, and time. This review has identified five types of interventions to reduce hospital MAEs in children: barcode medicine administration, electronic prescribing, education, use of smart pumps, and standard concentrations. Conclusion: This review has identified a wide variation in the prevalence of hospital MAEs in children. This is attributed to

  20. Peak-counts blood flow model-errors and limitations

    International Nuclear Information System (INIS)

    Mullani, N.A.; Marani, S.K.; Ekas, R.D.; Gould, K.L.

    1984-01-01

    The peak-counts model has several advantages, but its use may be limited by the condition that the venous egress must be negligible at the time of peak counts. Consequently, blood flow measurements by the peak-counts model will depend on the bolus size, the bolus duration, and the minimum transit time of the bolus through the region of interest. The effect of bolus size on the measurement of extraction fraction and blood flow was evaluated by injecting 1 to 30 ml of rubidium chloride into the femoral vein of a dog and measuring the myocardial activity with a beta probe over the heart. Regional blood flow measurements were not found to vary with bolus sizes up to 30 ml. The effect of bolus duration was studied by injecting a 10 cc bolus of tracer at different speeds into the femoral vein of a dog. All intravenous injections undergo a broadening of the bolus duration due to the transit time of the tracer through the lungs and the heart. This transit time was found to range from 4 to 6 seconds FWHM and dominates the duration of the bolus to the myocardium for injections of up to 3 seconds. A computer simulation has been carried out in which the parameters of delay time, extraction fraction, and bolus duration can be changed to assess the errors in the peak-counts model. The results of the simulations show that the error will be greatest for short transit-time delays and for low extraction fractions.

  1. ILRS Activities in Monitoring Systematic Errors in SLR Data

    Science.gov (United States)

    Pavlis, E. C.; Luceri, V.; Kuzmicz-Cieslak, M.; Bianco, G.

    2017-12-01

    The International Laser Ranging Service (ILRS) contributes to ITRF development unique information that only Satellite Laser Ranging (SLR) is sensitive to: the definition of the origin and, in equal parts with VLBI, the scale of the model. For the development of ITRF2014, the ILRS analysts adopted a revision of the internal standards and procedures used in generating our contribution from the eight ILRS Analysis Centers. The improved results for the ILRS components were reflected in the resulting new time series of the ITRF origin and scale, showing insignificant trends and tighter scatter. This effort was further extended after the release of ITRF2014 with the execution of a Pilot Project (PP) in the 2016-2017 timeframe that demonstrated the robust estimation of persistent systematic errors at the millimeter level. The ILRS ASC is now turning this into an operational tool to monitor station performance and to generate a history of systematics at each station, to be used with each re-analysis for future ITRF model developments. This is part of a broader ILRS effort to improve the quality control of the data collection process as well as that of our products. To this end, the ILRS has established a "Quality Control Board" (QCB) that comprises members from the analysis and engineering groups, the Central Bureau, and even user groups with special interests. The QCB meets by telecon monthly, oversees the various ongoing projects, and develops ideas for new tools and future products. This presentation will focus on the main topic, with an update on the results so far and the schedule for the near future and its operational implementation, along with a brief description of upcoming new ILRS products.

  2. IceCube systematic errors investigation: Simulation of the ice

    Energy Technology Data Exchange (ETDEWEB)

    Resconi, Elisa; Wolf, Martin [Max-Planck-Institute for Nuclear Physics, Heidelberg (Germany); Schukraft, Anne [RWTH, Aachen University (Germany)

    2010-07-01

    IceCube is a neutrino observatory for astroparticle physics and astronomy research at the South Pole. It uses one cubic kilometer of Antarctica's deepest ice (1500 m-2500 m in depth) to detect Cherenkov light, generated by charged particles traveling through the ice, with an array of phototubes encapsulated in glass pressure spheres. The arrival times and deposited charges of the detected photons are the base measurements used for the track and energy reconstruction of those charged particles. The optical properties of the deep Antarctic ice vary from layer to layer, so measurements of the ice properties and their correct modeling in the Monte Carlo simulation are of primary importance for a correct understanding of the IceCube telescope's behavior. After a short summary of the different methods used to investigate the ice properties and to calibrate the detector, we show how the simulation obtained using this information compares to the measured data and how systematic errors due to uncertain ice properties are determined in IceCube.

  3. Causes of medication administration errors in hospitals: a systematic review of quantitative and qualitative evidence.

    Science.gov (United States)

    Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M

    2013-11-01

    ), patient factors (availability, acuity), staff health status (fatigue, stress) and interruptions/distractions during drug administration. Few studies sought to determine the causes of intravenous MAEs. A number of latent pathway conditions were less well explored, including local working culture and high-level managerial decisions. Causes were often described superficially; this may be related to the use of quantitative surveys and observation methods in many studies, limited use of established error causation frameworks to analyse data and a predominant focus on issues other than the causes of MAEs among studies. As only English language publications were included, some relevant studies may have been missed. Limited evidence from studies included in this systematic review suggests that MAEs are influenced by multiple systems factors, but if and how these arise and interconnect to lead to errors remains to be fully determined. Further research with a theoretical focus is needed to investigate the MAE causation pathway, with an emphasis on ensuring interventions designed to minimise MAEs target recognised underlying causes of errors to maximise their impact.

  4. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation of the atomic masses. The reference data for Periods 8 and 9 manifest a systematic error in the computer software applied to such calculations (this systematic error increases with the number of elements in the Table).

  6. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than the ISO 5725 norm suggests. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to the manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects the actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, the experiments demonstrate that the distribution is not centred and that a correction must be applied. In this context, the work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of Supplement 1 of the GUM. (paper)
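
The type B versus type A contrast in the abstract can be sketched as follows; the specification bound, the bias, and the spread are invented values, not taken from any real datasheet:

```python
import numpy as np

rng = np.random.default_rng(3)

# Type B: the datasheet bounds the offset error by +/- a; the maximum-entropy
# distribution on [-a, a] is uniform, giving u = a / sqrt(3).
a = 3.0                                # assumed specification bound
u_type_b = a / np.sqrt(3.0)

# Type A: repeated offset measurements. If the real distribution is narrow
# but *not* centred (as the paper's experiments found), the sample mean is a
# bias to be corrected and the sample SD is the uncertainty.
readings = rng.normal(1.2, 0.4, 200)   # biased and narrow: violates the type B picture
bias = readings.mean()                  # correction to apply
u_type_a = readings.std(ddof=1)

print(round(u_type_b, 3), round(bias, 2), round(u_type_a, 2))
```

Here the type B value (≈ 1.73) overstates the spread and, more importantly, says nothing about the off-centre bias that only the type A experiment reveals.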

  7. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite the widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps on improving patient safety was searched in PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014), and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after the removal of papers that did not meet the inclusion criteria. We assessed both the benefits and the negative effects of smart pumps from these studies. One of the benefits of using smart pumps was the interception of errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates, the overriding of soft alerts, non-intercepted errors, and the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play the main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug

  8. Seeing your error alters my pointing: observing systematic pointing errors induces sensori-motor after-effects.

    Directory of Open Access Journals (Sweden)

    Roberta Ronchi

    Full Text Available During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result into sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects. Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot visual targets location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift were recorded before and after first-person's perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points on the right-side of a target (i.e., without error correction produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion "to feel" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.

  9. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motion. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, which has less impact due to the stability of organ movement under DIBH. The systematic reproducibility is also about half of the random error, because the high efficiency of the modern linac can reduce the systematic uncertainty effectively, while the random errors are uncontrollable. (paper)
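
A common convention for separating systematic and random setup errors per direction is the SD of per-patient mean errors versus the RMS of per-patient SDs; the paper may define the terms differently, and all magnitudes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated setup errors in one direction: each patient has a persistent
# offset (systematic component) plus day-to-day scatter (random component).
n_patients, n_fractions = 6, 30
patient_offset = rng.normal(0.0, 0.5, n_patients)    # mm, per-patient offsets
errors = patient_offset[:, None] + rng.normal(0.0, 1.0, (n_patients, n_fractions))

Sigma = errors.mean(axis=1).std(ddof=1)                      # group systematic error
sigma = np.sqrt((errors.std(axis=1, ddof=1) ** 2).mean())    # group random error
print(round(Sigma, 2), "mm systematic,", round(sigma, 2), "mm random")
```

Under this convention the systematic component is whatever survives averaging over a patient's fractions, which is why DIBH stability shows up mainly as a small intrafraction term.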

  10. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  11. Combined Coding And Modulation Using Runlength Limited Error ...

    African Journals Online (AJOL)

    In this paper we propose a Combined Coding and Modulation (CCM) scheme employing RLL/ECCs and MPSK modulation as well as RLL/ECC codes and BFSK/MPSK modulation with a view to optimise on channel bandwidth. The CCM codes and their trellis are designed and their error performances simulated in AWGN ...

  12. The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews.

    Science.gov (United States)

    Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing

    2017-09-05

    Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 to be reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for

  13. Investigation into the limitations of straightness interferometers using a multisensor-based error separation method

    Science.gov (United States)

    Weichert, Christoph; Köchert, Paul; Schötka, Eugen; Flügge, Jens; Manske, Eberhard

    2018-06-01

    The uncertainty of a straightness interferometer is independent of the component used to introduce the divergence angle between the two probing beams, and is limited by three main error sources, which are linked to each other: their resolution, the influence of refractive index gradients and the topography of the straightness reflector. To identify the configuration with minimal uncertainties under laboratory conditions, a fully fibre-coupled heterodyne interferometer was successively equipped with three different wedge prisms, resulting in three different divergence angles (4°, 8° and 20°). To separate the error sources an independent reference with a smaller reproducibility is needed. Therefore, the straightness measurement capability of the Nanometer Comparator, based on a multisensor error separation method, was improved to provide measurements with a reproducibility of 0.2 nm. The comparison results revealed that the influence of the refractive index gradients of air did not increase with interspaces between the probing beams of more than 11.3 mm. Therefore, over a movement range of 220 mm, the lowest uncertainty was achieved with the largest divergence angle. The dominant uncertainty contribution arose from the mirror topography, which was additionally determined with a Fizeau interferometer. The measured topography agreed within  ±1.3 nm with the systematic deviations revealed in the straightness comparison, resulting in an uncertainty contribution of 2.6 nm for the straightness interferometer.

  14. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition
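
    The recalculation described above, with random setup error convolved into the dose data, can be illustrated in one dimension: blurring an idealized field edge with a Gaussian whose width is the random error, whereas a systematic error would instead rigidly shift the profile. A sketch with invented numbers (not the FastPlan algorithm):

```python
import math

def gaussian_kernel(sigma, dx, half_width=4.0):
    """Discrete normalized Gaussian sampled every dx out to half_width sigmas."""
    n = int(half_width * sigma / dx)
    weights = [math.exp(-0.5 * ((i * dx) / sigma) ** 2) for i in range(-n, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blur(profile, sigma, dx):
    """Random setup error: convolve the planned 1D dose profile with a Gaussian."""
    kernel = gaussian_kernel(sigma, dx)
    half = len(kernel) // 2
    out = []
    for i in range(len(profile)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - half, 0), len(profile) - 1)  # clamp at edges
            acc += w * profile[idx]
        out.append(acc)
    return out

# Idealized field edge: full dose inside the target, zero outside (invented)
dx = 1.0  # mm per sample
profile = [1.0] * 30 + [0.0] * 30
blurred = blur(profile, sigma=3.0, dx=dx)  # 3 mm random error softens the edge
# A systematic error would instead rigidly shift the whole profile
```

The blurred penumbra is what lowers the minimum target dose, most severely for small targets whose whole volume sits near the field edge.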

  15. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
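
    The deviation modeled in this Note arises because transmitted intensities, not absorbances, add across a finite spectral band. A small numerical sketch (the band shape, extinction coefficients and concentrations are invented; the calculation is the standard polychromatic Beer-Lambert average, not the author's exact parametrization):

```python
import math

def apparent_absorbance(conc, eps_center=1000.0, slope=50.0, halfwidth=5):
    """Apparent absorbance of an equal-intensity polychromatic band.

    The molar extinction coefficient varies linearly across the band
    (invented values); transmitted intensities add, so the measured
    absorbance falls below the monochromatic Beer-Lambert value.
    """
    path = 1.0  # cm
    trans = [10.0 ** (-(eps_center + slope * k) * conc * path)
             for k in range(-halfwidth, halfwidth + 1)]
    return -math.log10(sum(trans) / len(trans))

results = []
for conc in (1e-4, 5e-4, 1e-3):
    a_true = 1000.0 * conc  # monochromatic Beer-Lambert value
    results.append((a_true, apparent_absorbance(conc)))
# The relative shortfall grows with concentration and with the slope of eps
```

With these invented parameters the shortfall at A = 1 is a few percent, consistent with the magnitude of error the Note warns about even at "safe" absorbances.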

  16. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  17. Tolerable systematic errors in Really Large Hadron Collider dipoles

    International Nuclear Information System (INIS)

    Peggs, S.; Dell, F.

    1996-01-01

    Maximum allowable systematic harmonics for arc dipoles in a Really Large Hadron Collider are derived. The possibility of half cell lengths much greater than 100 meters is justified. A convenient analytical model evaluating horizontal tune shifts is developed, and tested against a sample high field collider

  18. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NARCIS (Netherlands)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Ozben, C. S.; Prasuhn, D.; Sandri, P. Levi; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-01-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY

  19. Resolution unfolding with limits imposed by statistical experimental errors

    International Nuclear Information System (INIS)

    Lang, D.W.

    1977-02-01

    A typical form of the resolution equation is derived by considering the physical measurement of an energy dependent spectrum. It is shown that the information contained in a data set may be expressed by writing the spectrum as a linear combination of a set of resolution functions. Introduction of other functions to describe the spectrum involves extra physical information. An iterative conjugate gradient technique to obtain a spectrum consistent with the data is described. At each iteration the residual discrepancy between the currently predicted yield and the measured data is used to generate the form and magnitude of the next term to be added to the spectrum. Other unfolding techniques are described and analysed, some faster than the conjugate gradient technique in special cases, but restricted in usefulness by implicit assumptions about the resolution functions. The nature of residual errors is considered. The variations of independently measured data sets are discussed, and hence, the variations of the sequence of terms appearing in a consequent conjugate gradient analysis. An approximate measure is obtained for the expected variation of independently obtained spectra. Refinements are briefly considered which apply to a resolution function that is not known precisely or which make use of a requirement that the spectrum be positive throughout its range. It is concluded that a conjugate gradient technique is best if sufficient computer facilities are available, and that, of the less demanding techniques, the best is one that is essentially a more slowly convergent version of a conjugate gradient method. (author)
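
    The iteration described above, in which each step adds to the spectrum a term generated from the current residual, is the conjugate gradient method applied to the normal equations of the resolution matrix. A self-contained sketch on an invented 3-bin resolution function (not the author's implementation):

```python
def cg_unfold(R, d, n_iter):
    """Unfold d = R s by conjugate gradients on the normal equations.

    Each iteration adds to the spectrum a correction built from the
    current residual; stopping early limits noise amplification.
    """
    mul = lambda M, v: [sum(m * x for m, x in zip(row, v)) for row in M]
    n = len(R[0])
    Rt = [[R[i][j] for i in range(len(R))] for j in range(n)]  # transpose
    s = [0.0] * n
    r = mul(Rt, d)  # gradient residual at the starting point s = 0
    p = r[:]
    for _ in range(n_iter):
        q = mul(Rt, mul(R, p))
        alpha = sum(ri * ri for ri in r) / sum(pi * qi for pi, qi in zip(p, q))
        s = [si + alpha * pi for si, pi in zip(s, p)]
        r_new = [ri - alpha * qi for ri, qi in zip(r, q)]
        beta = sum(ri * ri for ri in r_new) / sum(ri * ri for ri in r)
        p = [ri + beta * pi for ri, pi in zip(r_new, p)]
        r = r_new
    return s

# Invented 3-bin resolution matrix (rows: measured yields) and noiseless data
R = [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]]
true_s = [1.0, 2.0, 0.5]
d = [sum(m * x for m, x in zip(row, true_s)) for row in R]
s_hat = cg_unfold(R, d, n_iter=3)  # exact in 3 iterations for a 3x3 system
```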

  20. Medication errors in the Middle East countries: a systematic review of the literature.

    Science.gov (United States)

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20 %) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1 % to 90.5 % for prescribing and from 9.4 % to 80 % for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15 % to 34.8 % of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality.

  1. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    Science.gov (United States)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  2. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

    We study the treatment of systematic errors in the determination of v2 for charged jets in √sNN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data we evaluate the χ2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ2 and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent co-variance matrix and obtain identical results, demonstrating that the two methods are equivalent.
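
    The recasting described in the abstract can be reproduced in miniature: build a covariance matrix whose diagonal holds the statistical variances and whose off-diagonal elements carry the fully correlated systematic term, then evaluate χ2 against a null model. The two data points and uncertainties below are invented, not the ALICE values:

```python
def invert(M):
    """Gauss-Jordan inverse (no pivoting; fine for small SPD covariances)."""
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for i in range(n):
        piv = A[i][i]
        A[i] = [x / piv for x in A[i]]
        for k in range(n):
            if k != i:
                f = A[k][i]
                A[k] = [x - f * y for x, y in zip(A[k], A[i])]
    return [row[n:] for row in A]

def chi2_with_cov(y, model, stat, corr_sys):
    """chi2 with a fully correlated systematic recast into the covariance."""
    n = len(y)
    C = [[(stat[i] ** 2 if i == j else 0.0) + corr_sys[i] * corr_sys[j]
          for j in range(n)] for i in range(n)]
    Cinv = invert(C)
    r = [yi - mi for yi, mi in zip(y, model)]
    return sum(r[i] * Cinv[i][j] * r[j] for i in range(n) for j in range(n))

y, model, stat = [1.0, 2.0], [0.5, 1.5], [0.5, 0.5]
chi2_stat_only = chi2_with_cov(y, model, stat, [0.0, 0.0])  # plain chi2
chi2_with_sys = chi2_with_cov(y, model, stat, [0.5, 0.5])   # common shift absorbed
```

A common offset between data and model is partly absorbed by the correlated term, so χ2 drops relative to the statistics-only value, as a covariance treatment of correlated systematics should produce.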

  3. Saccades to remembered target locations: an analysis of systematic and variable errors.

    Science.gov (United States)

    White, J M; Sparks, D L; Stanford, T R

    1994-01-01

    We studied the effects of varying delay interval on the accuracy and velocity of saccades to the remembered locations of visual targets. Remembered saccades were less accurate than control saccades. Both systematic and variable errors contributed to the loss of accuracy. Systematic errors were similar in size for delay intervals ranging from 400 msec to 5.6 sec, but variable errors increased monotonically as delay intervals were lengthened. Compared to control saccades, remembered saccades were slower and the peak velocities were more variable. However, neither peak velocity nor variability in peak velocity was related to the duration of the delay interval. Our findings indicate that a memory-related process is not the major source of the systematic errors observed on memory trials.

  4. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER

    International Nuclear Information System (INIS)

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-01-01

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) independent slide pitch test by use of a non-tilted reference beam, (3) non-tilted reference test combined with tilted sample, (4) penta-prism scanning mode without a reference beam correction, (5) non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  5. Impact of systematic errors on DVH parameters of different OAR and target volumes in Intracavitary Brachytherapy (ICBT)

    International Nuclear Information System (INIS)

    Mourya, Ankur; Singh, Gaganpreet; Kumar, Vivek; Oinam, Arun S.

    2016-01-01

    The aim of this study is to analyze the impact of systematic errors on DVH parameters of different OARs and target volumes in intracavitary brachytherapy (ICBT). To quantify the changes in dose-volume histogram parameters due to systematic errors in applicator reconstruction during brachytherapy planning, known errors in catheter reconstruction have to be introduced in the applicator coordinate system.

  6. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

    One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of the radionuclides that suddenly and without control reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and measurement itself, which contribute to a high degree to the total mass activity evaluation error. Statistical errors in gamma spectrometry as well as in total mass alpha and beta activity evaluation are also discussed. Besides, some of the possible sources of errors in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of these errors to the total mass activity evaluation error is estimated, and procedures that could possibly reduce it are discussed (author)

  7. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    International Nuclear Information System (INIS)

    Parker, S

    2015-01-01

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. 
Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors
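
    The control limits in studies like the one above are typically Shewhart individuals limits: the baseline mean plus or minus 2.66 times the average moving range. A sketch with invented readings showing how a simulated systematic shift can breach the tight control limits while remaining inside the wider specification limits:

```python
def control_limits(baseline):
    """Shewhart individuals chart: mean +/- 2.66 * average moving range."""
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(b - a) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

# Invented daily energy-check readings (arbitrary units) under stable conditions
baseline = [10.02, 9.98, 10.01, 9.99, 10.00, 10.03, 9.97, 10.00]
lcl, ucl = control_limits(baseline)

spec_lo, spec_hi = 9.8, 10.2  # wider TG-142-style specification limits (invented)
reading = 10.12               # simulated systematic shift
flagged_by_spc = not (lcl <= reading <= ucl)           # caught by control limits
flagged_by_spec = not (spec_lo <= reading <= spec_hi)  # missed by spec limits
```

Because the control limits are derived from the process's own short-term variation, they sit well inside the specification limits and flag a systematic drift earlier, which is the conclusion the study reaches.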

  8. Human-simulation-based learning to prevent medication error: A systematic review.

    Science.gov (United States)

    Sarfati, Laura; Ranchon, Florence; Vantard, Nicolas; Schwiertz, Vérane; Larbre, Virginie; Parat, Stéphanie; Faudel, Amélie; Rioufol, Catherine

    2018-01-31

    In the past 2 decades, there has been an increasing interest in simulation-based learning programs to prevent medication error (ME). To improve knowledge, skills, and attitudes in prescribers, nurses, and pharmaceutical staff, these methods enable training without directly involving patients. However, best practices for simulation for healthcare providers are as yet undefined. By analysing the current state of experience in the field, the present review aims to assess whether human simulation in healthcare helps to reduce ME. A systematic review was conducted on Medline from 2000 to June 2015, associating the terms "Patient Simulation," "Medication Errors," and "Simulation Healthcare." Reports of technology-based simulation were excluded, to focus exclusively on human simulation in nontechnical skills learning. Twenty-one studies assessing simulation-based learning programs were selected, focusing on pharmacy, medicine or nursing students, or concerning programs aimed at reducing administration or preparation errors, managing crises, or learning communication skills for healthcare professionals. The studies varied in design, methodology, and assessment criteria. Few demonstrated that simulation was more effective than didactic learning in reducing ME. This review highlights a lack of long-term assessment and real-life extrapolation, with limited scenarios and participant samples. These various experiences, however, help in identifying the key elements required for an effective human simulation-based learning program for ME prevention: ie, scenario design, debriefing, and perception assessment. The performance of these programs depends on their ability to reflect reality and on professional guidance. Properly regulated simulation is a good way to train staff in events that happen only exceptionally, as well as in standard daily activities. By integrating human factors, simulation seems to be effective in preventing iatrogenic risk related to ME, if the program is

  9. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
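
    The limaçon model referred to above approximates the measured radius as r(θ) ≈ R + a·cosθ + b·sinθ, so the first-harmonic terms absorb spindle eccentricity and the residuals form the roundness profile. A sketch with an invented 37 mm part (the eccentricity and lobing values are made up, and this is the classical limaçon fit, not the paper's multi-systematic-error model):

```python
import math

def limacon_fit(thetas, radii):
    """Least-squares limacon fit r(theta) ~ R + a*cos(theta) + b*sin(theta).

    For evenly spaced angles the normal equations decouple, so the
    eccentricity terms (a, b) are discrete Fourier sums; the residuals
    after subtracting the fit are the roundness profile.
    """
    n = len(radii)
    R = sum(radii) / n
    a = 2.0 / n * sum(r * math.cos(t) for t, r in zip(thetas, radii))
    b = 2.0 / n * sum(r * math.sin(t) for t, r in zip(thetas, radii))
    return R, a, b

# Invented part: 37 mm radius, 0.01 mm eccentricity, 0.002 mm 3-lobe form error
n = 360
thetas = [2.0 * math.pi * i / n for i in range(n)]
radii = [37.0 + 0.01 * math.cos(t - 0.3) + 0.002 * math.cos(3.0 * t)
         for t in thetas]
R, a, b = limacon_fit(thetas, radii)
roundness = [r - (R + a * math.cos(t) + b * math.sin(t))
             for t, r in zip(thetas, radii)]
peak_to_valley = max(roundness) - min(roundness)  # the 3-lobe error survives
```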

  10. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  11. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    Directory of Open Access Journals (Sweden)

    Ángel J. Jarama

    2017-09-01

    Full Text Available In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. Distance bias is calculated from the delay of the signal produced by the refractivity index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmosphere conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  12. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In studied practical cases, it is seen that the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. 
The sampling of systematic errors could also
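
    The sampling approach described above can be sketched for the simplest case of one shared systematic error: average the product of independent Gaussians over draws of the systematic shift, and compare with the closed-form multivariate Gaussian whose covariance is the diagonal of statistical variances plus the systematic outer product. All numbers are invented; this is a two-point toy, not the study's nuclear-data setup:

```python
import math
import random

def likelihood_sampling(e, t, stat, sys, n_samples=20000, seed=1):
    """Estimate the likelihood by sampling the shared systematic error.

    Draws the systematic shift eta ~ N(0, sys) and averages the product
    of independent Gaussians; no covariance matrix inversion is needed.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        eta = rng.gauss(0.0, sys)
        logp = sum(-0.5 * ((ei - ti - eta) / si) ** 2
                   - math.log(si * math.sqrt(2.0 * math.pi))
                   for ei, ti, si in zip(e, t, stat))
        total += math.exp(logp)
    return total / n_samples

def likelihood_exact(e, t, stat, sys):
    """Two-point multivariate Gaussian with C = diag(stat^2) + sys^2."""
    c11, c22 = stat[0] ** 2 + sys ** 2, stat[1] ** 2 + sys ** 2
    c12 = sys ** 2
    det = c11 * c22 - c12 * c12
    r1, r2 = e[0] - t[0], e[1] - t[1]
    q = (c22 * r1 * r1 - 2.0 * c12 * r1 * r2 + c11 * r2 * r2) / det
    return math.exp(-0.5 * q) / (2.0 * math.pi * math.sqrt(det))

e, t, stat, sys = [1.2, 0.9], [1.0, 1.0], [0.2, 0.2], 0.1
l_sampled = likelihood_sampling(e, t, stat, sys)
l_exact = likelihood_exact(e, t, stat, sys)
```

The two estimates agree, but only in the large-sample limit, which is the slow convergence the study identifies as the method's practical drawback.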

  13. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Science.gov (United States)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  14. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
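
    The cancellation that makes time-difference imaging robust can be seen in a three-channel toy: a fixed multiplicative gain error per channel drops out when raw voltages are divided by a reference frame, which is what f-EIT exploits and absolute EIT cannot. All voltages and gains below are invented:

```python
def relative_change(raw, reference):
    """Time-difference EIT step: normalize each channel by a reference frame."""
    return [v / v0 for v, v0 in zip(raw, reference)]

# Invented true boundary voltages before and after a physiological change
true_ref = [1.00, 0.80, 0.60]
true_now = [1.10, 0.80, 0.54]
gains = [0.95, 1.07, 1.02]  # unknown fixed gain error in each channel
meas_ref = [g * v for g, v in zip(gains, true_ref)]
meas_now = [g * v for g, v in zip(gains, true_now)]
ratio = relative_change(meas_now, meas_ref)  # the gain errors cancel exactly
```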

  15. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  16. ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction

    International Nuclear Information System (INIS)

    Wu Yan; Shannon, Mark A.

    2006-01-01

    The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip-sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM, and CPD measurement results of two systems, platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon, are discussed.
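    The correction described in the abstract amounts to extrapolating the measured CPD to infinite driving amplitude. A minimal sketch, with hypothetical data constructed so that CPD = 0.40 V + 0.11 V²/V_ac (all values illustrative, not from the paper):

    ```python
    import numpy as np

    # Hypothetical measured data: CPD readings (V) at several ac driving amplitudes (V)
    v_ac = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
    cpd_measured = np.array([0.62, 0.51, 0.455, 0.4275, 0.41375])

    # Regress measured CPD against 1/V_ac; the intercept (the 1/V_ac -> 0 limit)
    # estimates the true CPD, free of the amplitude-dependent systematic error.
    slope, intercept = np.polyfit(1.0 / v_ac, cpd_measured, 1)
    print(f"true CPD estimate: {intercept:.3f} V, error-term slope: {slope:.3f} V^2")
    ```

    With real data the regression residuals indicate whether the linear 1/V_ac model actually holds for the tip-sample system at hand.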

  17. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors

    International Nuclear Information System (INIS)

    Martinelli, T.; Panini, G.C.; Amoroso, A.

    1989-11-01

    Information about systematic errors is not given in EXFOR, the data base of nuclear experimental measurements: their assessment is left to the ability of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, gives a valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in the experiments. The features of the project which led to the implementation of the system are presented. (author)

  18. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals' psychological well-being.

    Science.gov (United States)

    Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry

    2010-12-01

    Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted with the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.

  19. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  20. Electronic portal image assisted reduction of systematic set-up errors in head and neck irradiation

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Soernsen de Koste, John R. van; Creutzberg, Carien L.; Visser, Andries G.; Levendag, Peter C.; Heijmen, Ben J.M.

    2001-01-01

    Purpose: To quantify systematic and random patient set-up errors in head and neck irradiation and to investigate the impact of an off-line correction protocol on the systematic errors. Material and methods: Electronic portal images were obtained for 31 patients treated for primary supra-glottic larynx carcinoma who were immobilised using a polyvinyl chloride cast. The observed patient set-up errors were input to the shrinking action level (SAL) off-line decision protocol and appropriate set-up corrections were applied. To assess the impact of the protocol, the positioning accuracy without application of set-up corrections was reconstructed. Results: The set-up errors obtained without set-up corrections (1 standard deviation (SD)=1.5-2 mm for random and systematic errors) were comparable to those reported in other studies on similar fixation devices. On average, six fractions per patient were imaged and the set-up of half the patients was changed due to the decision protocol. Most changes were detected during weekly check measurements, not during the first days of treatment. The application of the SAL protocol reduced the width of the distribution of systematic errors to 1 mm (1 SD), as expected from simulations. A retrospective analysis showed that this accuracy should be attainable with only two measurements per patient using a different off-line correction protocol, which does not apply action levels. Conclusions: Off-line verification protocols can be particularly effective in head and neck patients due to the small magnitude of the random set-up errors. The excellent set-up reproducibility that can be achieved with such protocols enables accurate dose delivery in conformal treatments.
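    The population statistics used in such studies can be sketched as follows: each patient's mean displacement over imaged fractions estimates that patient's systematic error, and the spread within a patient estimates the random error. The numbers below are simulated to mimic the 1.5-2 mm (1 SD) values quoted in the abstract; they are illustrative only.

    ```python
    import numpy as np

    # Hypothetical portal-imaging displacements (mm): rows = patients, cols = imaged fractions
    np.random.seed(0)
    true_systematic = np.random.normal(0.0, 1.5, size=20)            # per-patient set-up offset
    displacements = true_systematic[:, None] + np.random.normal(0.0, 1.5, size=(20, 6))

    patient_means = displacements.mean(axis=1)   # estimate of each patient's systematic error
    patient_sds = displacements.std(axis=1, ddof=1)

    # Population systematic error = SD of per-patient means (slightly inflated by the
    # random component); population random error = RMS of the per-patient SDs.
    Sigma = patient_means.std(ddof=1)
    sigma = np.sqrt(np.mean(patient_sds**2))
    print(f"systematic SD ~ {Sigma:.2f} mm, random SD ~ {sigma:.2f} mm")
    ```

    An off-line protocol such as SAL acts on `patient_means`: correcting each patient's estimated systematic offset narrows the distribution of means while leaving the per-fraction random spread untouched.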

  1. Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Patel, Sandeep K.

    1998-01-01

    Imaging of the Sunyaev-Zeldovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii, and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0, analyzing the clusters using either their large or mean angular core radius, are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta-model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.

  2. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behavior of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analyzed. A theoretical calibration method is proposed, which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.

  3. End-point construction and systematic titration error in linear titration curves-complexation reactions

    NARCIS (Netherlands)

    Coenegracht, P.M.J.; Duisenberg, A.J.M.

    The systematic titration error which is introduced by the intersection of tangents to hyperbolic titration curves is discussed. The effects of the apparent (conditional) formation constant, of the concentration of the unknown component and of the ranges used for the end-point construction are

  4. On the effect of systematic errors in near real time accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1987-01-01

    Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e. for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper sequential test procedures are considered which are based on the so-called MUF-Residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity if this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length for constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered in order that the efficiency of NRTA evaluation methods can be estimated realistically

  5. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevin.coakley@nist.go [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S. [National Institute of Standards and Technology, Gaithersburg, MD (United States); Yue, A.T. [University of Tennessee, Knoxville, TN (United States); Laptev, A.B. [Tulane University, New Orleans, LA (United States)

    2009-12-11

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.
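    The geometric sensitivity the abstract describes can be illustrated by propagating a small angular misalignment through the Bragg condition λ = 2d sin θ. The lattice spacing, nominal angle, and misalignment spread below are assumptions chosen for illustration, not the NIST experiment's actual values:

    ```python
    import numpy as np

    # Sketch: Monte Carlo propagation of rocking-axis misalignment through
    # lambda = 2 d sin(theta). All numeric values are illustrative.
    d = 3.135e-10             # Si(111) plane spacing (m), approximate
    theta = np.radians(20.0)  # nominal Bragg angle (assumed)

    rng = np.random.default_rng(1)
    misalign = rng.normal(0.0, np.radians(0.05), size=100_000)  # angular error (rad)

    lam_nominal = 2 * d * np.sin(theta)
    lam_sampled = 2 * d * np.sin(theta + misalign)

    # Relative bias is second order in the misalignment; relative spread is
    # first order, roughly cot(theta) times the angular SD.
    rel_bias = lam_sampled.mean() / lam_nominal - 1.0
    rel_spread = lam_sampled.std() / lam_nominal
    print(f"relative bias: {rel_bias:.2e}, relative spread: {rel_spread:.2e}")
    ```

    The asymmetry between the two effects is the useful point: a zero-mean angular error leaves the mean wavelength nearly unbiased but still contributes to the quoted systematic uncertainty through its spread.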

  6. Prevalence and reporting of recruitment, randomisation and treatment errors in clinical trials: A systematic review.

    Science.gov (United States)

    Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A

    2018-06-01

    Background/aims In clinical trials, it is not unusual for errors to occur during the process of recruiting, randomising and providing treatment to participants. For example, an ineligible participant may inadvertently be randomised, a participant may be randomised in the incorrect stratum, a participant may be randomised multiple times when only a single randomisation is permitted or the incorrect treatment may inadvertently be issued to a participant at randomisation. Such errors have the potential to introduce bias into treatment effect estimates and affect the validity of the trial, yet there is little motivation for researchers to report these errors and it is unclear how often they occur. The aim of this study is to assess the prevalence of recruitment, randomisation and treatment errors and review current approaches for reporting these errors in trials published in leading medical journals. Methods We conducted a systematic review of individually randomised, phase III, randomised controlled trials published in New England Journal of Medicine, Lancet, Journal of the American Medical Association, Annals of Internal Medicine and British Medical Journal from January to March 2015. The number and type of recruitment, randomisation and treatment errors that were reported and how they were handled were recorded. The corresponding authors were contacted for a random sample of trials included in the review and asked to provide details on unreported errors that occurred during their trial. Results We identified 241 potentially eligible articles, of which 82 met the inclusion criteria and were included in the review. These trials involved a median of 24 centres and 650 participants, and 87% involved two treatment arms. Recruitment, randomisation or treatment errors were reported in 32 of 82 trials (39%), with a median of eight errors. The most commonly reported error was ineligible participants inadvertently being randomised. No mention of recruitment, randomisation

  7. Characterization of electromagnetic fields in the aSPECT spectrometer and reduction of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Ayala Guardia, Fidel

    2011-10-15

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of the free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed in the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully performed. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits to improve the present literature value of a. Furthermore, a custom NMR-magnetometer was developed and improved during this thesis, which will lead to a reduction of magnetic field-related uncertainties down to a negligible level, allowing a to be determined with a total relative error of 0.3%.

  8. Characterization of electromagnetic fields in the αSPECT spectrometer and reduction of systematic errors

    International Nuclear Information System (INIS)

    Ayala Guardia, Fidel

    2011-10-01

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of the free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed in the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully performed. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits to improve the present literature value of a. Furthermore, a custom NMR-magnetometer was developed and improved during this thesis, which will lead to a reduction of magnetic field-related uncertainties down to a negligible level, allowing a to be determined with a total relative error of 0.3%.

  9. Limit of detection in the presence of instrumental and non-instrumental errors: study of the possible sources of error and application to the analysis of 41 elements at trace levels by inductively coupled plasma-mass spectrometry technique

    International Nuclear Information System (INIS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Tapparo, Andrea; Pastore, Paolo

    2015-01-01

    In this paper the detection limit was estimated when signals were affected by two error contributions, namely instrumental errors and operational non-instrumental errors. The detection limit was theoretically obtained following the hypothesis testing schema implemented with the calibration curve methodology. The experimental calibration design was based on J standards measured I times, with non-instrumental errors affecting each standard systematically but randomly among the J levels. A two-component variance regression was performed to determine the calibration curve and to define the detection limit in these conditions. The detection limit values obtained from the calibration at trace levels of 41 elements by ICP-MS were larger than those obtainable from a one-component variance regression. The role of the reagent impurities on the instrumental errors was ascertained and taken into account. Environmental pollution was studied as a source of non-instrumental errors. Its role was evaluated by Principal Component Analysis (PCA) applied to a series of nine calibrations performed over fourteen months. The influence of the seasonality of the environmental pollution on the detection limit was evident for many elements usually present in the urban air particulate. The obtained results clearly indicated the need of using the two-component variance regression approach for the calibration of all the elements usually present in the environment at significant concentration levels. - Highlights: • Limit of detection was obtained considering a two-component variance regression. • Calibration data may be affected by instrumental and operational-condition errors. • Calibration model was applied to determine 41 elements at trace level by ICP-MS. • Non-instrumental errors were identified by PCA analysis
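    The effect of the second variance component can be sketched with a toy calibration: a per-level "operational" shift is added on top of instrumental noise, and the pooled residual scatter (and hence the detection limit) grows accordingly. The design, noise levels, and the 3.29·s/slope detection-limit formula below are standard textbook choices, not taken from the paper:

    ```python
    import numpy as np

    # Illustrative calibration: J = 5 standards, I = 5 replicates each, with a
    # non-instrumental error acting systematically within each level but
    # randomly among levels, on top of instrumental noise.
    rng = np.random.default_rng(42)
    conc = np.repeat([0.0, 1.0, 2.0, 5.0, 10.0], 5)
    sigma_instr, sigma_oper = 0.05, 0.10
    level_shift = rng.normal(0, sigma_oper, 5)                 # one shift per standard
    signal = (0.02 + 0.5 * conc + np.repeat(level_shift, 5)
              + rng.normal(0, sigma_instr, conc.size))

    slope, intercept = np.polyfit(conc, signal, 1)
    resid_sd = np.std(signal - (intercept + slope * conc), ddof=2)  # pools both components

    # Hypothesis-testing detection limit (alpha = beta ~ 5%): LOD ~ 3.29 * s / slope.
    lod = 3.29 * resid_sd / slope
    print(f"slope={slope:.3f}, pooled residual SD={resid_sd:.3f}, LOD~{lod:.2f}")
    ```

    Setting `sigma_oper = 0` reproduces the one-component case and gives a visibly smaller LOD, which is the paper's central point: ignoring the operational component understates the detection limit.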

  10. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  11. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.

  12. Investigating Systematic Errors of the Interstellar Flow Longitude Derived from the Pickup Ion Cutoff

    Science.gov (United States)

    Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Complementary to the direct neutral particle measurements performed by e.g. IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool to investigate the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Due to this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUI's injection speed and thus the former neutral particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can shift the result by up to 5°. In turn, this means that every value for the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.

  13. Refractive error assessment: influence of different optical elements and current limits of biometric techniques.

    Science.gov (United States)

    Ribeiro, Filomena; Castanheira-Dinis, Antonio; Dias, Joao Mendanha

    2013-03-01

    To identify and quantify sources of error on refractive assessment using exact ray tracing. The Liou-Brennan eye model was used as a starting point and its parameters were varied individually within a physiological range. The contribution of each parameter to refractive error was assessed using linear regression curve fits and Gaussian error propagation analysis. A Monte Carlo analysis quantified the limits of refractive assessment given by current biometric measurements. Vitreous and aqueous refractive indices are the elements that influence refractive error the most, with a 1% change of each parameter contributing to a refractive error variation of +1.60 and -1.30 diopters (D), respectively. In the phakic eye, axial length measurements taken by ultrasound (vitreous chamber depth, lens thickness, and anterior chamber depth [ACD]) were the most sensitive to biometric errors, with a contribution to the refractive error of 62.7%, 14.2%, and 10.7%, respectively. In the pseudophakic eye, vitreous chamber depth showed the highest contribution at 53.7%, followed by postoperative ACD at 35.7%. When optic measurements were considered, postoperative ACD was the most important contributor, followed by anterior corneal surface and its asphericity. A Monte Carlo simulation showed that current limits of refractive assessment are 0.26 and 0.28 D for the phakic and pseudophakic eye, respectively. The most relevant optical elements either do not have available measurement instruments or the existing instruments still need to improve their accuracy. Ray tracing can be used as an optical assessment technique, and may be the correct path for future personalized refractive assessment. Copyright 2013, SLACK Incorporated.
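    The Gaussian error propagation step used in such analyses combines each parameter's sensitivity with its measurement uncertainty in quadrature. The sensitivities and uncertainties below are invented for illustration (they are not the paper's fitted values), but the bookkeeping is the same:

    ```python
    import math

    # Hedged sketch of Gaussian error propagation: refractive-error variance is
    # the quadrature sum of (sensitivity x parameter uncertainty) terms.
    # parameter: (assumed sensitivity dR/dp in D/mm, assumed sigma_p in mm)
    contributions = {
        "vitreous chamber depth": (1.35, 0.10),
        "lens thickness":         (0.65, 0.10),
        "anterior chamber depth": (0.55, 0.10),
    }

    variance = sum((dRdp * sp) ** 2 for dRdp, sp in contributions.values())
    sigma_R = math.sqrt(variance)

    for name, (dRdp, sp) in contributions.items():
        share = (dRdp * sp) ** 2 / variance * 100
        print(f"{name}: {share:.1f}% of refractive-error variance")
    print(f"total refractive uncertainty ~ {sigma_R:.2f} D")
    ```

    The percentage shares, not the absolute sensitivities, are what statements like "62.7%, 14.2%, and 10.7%" in the abstract refer to.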

  14. Achieving the Heisenberg limit in quantum metrology using quantum error correction.

    Science.gov (United States)

    Zhou, Sisi; Zhang, Mengzhen; Preskill, John; Jiang, Liang

    2018-01-08

    Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, a quantum error-correcting code can be constructed that suppresses the noise without obscuring the signal; the optimal code, achieving the best possible precision, can be found by solving a semidefinite program.
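    The gap between the two precision limits mentioned in the abstract is easy to make concrete: with N probes (or N uses of a probe), the standard quantum limit scales as 1/√N while the Heisenberg limit scales as 1/N, so the achievable gain from error-corrected probes grows as √N. A purely numerical illustration:

    ```python
    import numpy as np

    # Toy comparison of phase-estimation precision scaling: independent probes
    # give the standard quantum limit (SQL) sigma ~ 1/sqrt(N); error-corrected
    # or entangled probes can in principle reach the Heisenberg limit ~ 1/N.
    for n in (10, 100, 1000, 10000):
        sql = 1.0 / np.sqrt(n)
        heisenberg = 1.0 / n
        print(f"N={n:6d}  SQL ~ {sql:.4f}  Heisenberg ~ {heisenberg:.6f}  gain x{sql / heisenberg:.0f}")
    ```

    The abstract's point is that this √N gain survives Markovian noise only when the stated condition holds and the correcting code suppresses the noise without also suppressing the signal.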

  15. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    International Nuclear Information System (INIS)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-01-01

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
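
The gamma-index comparison used in this record can be sketched in one dimension. This is a simplified global-normalization version of the standard concept (Low et al.); the profile shapes and the 1 mm shift standing in for a leaf-bank offset are illustrative, not the study's measured fields.

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dose_crit=0.03, dta_crit=3.0):
    """Simplified 1D global gamma index. dose_crit is a fraction of the
    reference maximum (e.g. 3%); dta_crit is the distance-to-agreement
    criterion in mm. Returns one gamma value per reference point."""
    d_max = max(ref_dose)
    gammas = []
    for x_r, d_r in zip(ref_pos, ref_dose):
        best = float("inf")
        for x_e, d_e in zip(eval_pos, eval_dose):
            dd = (d_e - d_r) / (dose_crit * d_max)   # normalized dose difference
            dx = (x_e - x_r) / dta_crit              # normalized distance
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

# Illustrative profiles: the evaluated profile is shifted by 1 mm, mimicking
# a systematic leaf-bank offset.
xs = [i * 0.5 for i in range(41)]                      # 0..20 mm grid
ref = [math.exp(-((x - 10.0) / 4.0) ** 2) for x in xs]
ev = [math.exp(-((x - 11.0) / 4.0) ** 2) for x in xs]

g = gamma_1d(xs, ref, xs, ev)
pass_rate = sum(1 for v in g if v <= 1.0) / len(g)
print(f"3%/3mm gamma passing rate: {100 * pass_rate:.1f}%")
```

With 3%/3 mm criteria the 1 mm offset passes at every point (gamma stays near 1/3), which echoes the record's conclusion that common per-field criteria are insensitive to small systematic MLC offsets.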

  16. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    Energy Technology Data Exchange (ETDEWEB)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada and Department of Physics and Astronomy, University of Calgary, 2500 University Drive North West, Calgary, Alberta T2N 1N4 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4 (Canada) and Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada)

    2010-07-15

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets ({+-}1 mm in two banks, {+-}0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.

  17. 'When measurements mean action' decision models for portal image review to eliminate systematic set-up errors

    International Nuclear Information System (INIS)

    Wratten, C.R.; Denham, J.W.; O'Brien, P.; Hamilton, C.S.; Kron, T.; London Regional Cancer Centre, London, Ontario

    2004-01-01

    The aim of the present paper is to evaluate how the use of decision models in the review of portal images can eliminate systematic set-up errors during conformal therapy. Sixteen patients undergoing four-field irradiation of prostate cancer have had daily portal images obtained during the first two treatment weeks and weekly thereafter. The magnitude of random and systematic variations has been calculated by comparison of the portal image with the reference simulator images using the two-dimensional decision model embodied in the Hotelling's evaluation process (HEP). Random day-to-day set-up variation was small in this group of patients. Systematic errors were, however, common. In 15 of 16 patients, one or more errors of >2 mm were diagnosed at some stage during treatment. Sixteen of the 23 errors were between 2 and 4 mm. Although there were examples of oversensitivity of the HEP in three cases, and one instance of undersensitivity, the HEP proved highly sensitive to the small (2-4 mm) systematic errors that must be eliminated during high precision radiotherapy. The HEP has proven valuable in diagnosing very small (2-4 mm) systematic errors. Using one-dimensional decision models, HEP can eliminate the majority of systematic errors during the first 2 treatment weeks. Copyright (2004) Blackwell Science Pty Ltd

  18. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    Science.gov (United States)

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge

  19. On the effects of systematic errors in analysis of nuclear scattering data

    International Nuclear Information System (INIS)

    Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.

    1995-01-01

    The effects of systematic errors on elastic scattering differential cross-section data upon the assessment of quality fits to that data have been studied. Three cases are studied, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from 12 C, of 350 MeV 16 O- 16 O scattering and of 288.6 MeV 12 C- 12 C scattering. First, to estimate the probability of any unknown systematic errors, select sets of data have been processed using the method of generalized cross validation; a method based upon the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to data, upon allowance for angle variation, became overdetermined. A far simpler S function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed energy inverse scattering study to specify effective, local, Schroedinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs

  20. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    International Nuclear Information System (INIS)

    DeSalvo, Riccardo

    2015-01-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism different from loss angle and viscous models is necessary. • Mitigation measures proposed may bring coherence to the measurements of G

  1. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    Science.gov (United States)

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

    Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) to provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except for ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability, and thus should not be used in future studies.
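
The two reliability statistics reported above can be computed from a small subjects-by-trials table. This is a generic sketch of a one-way random-effects ICC and of the within-subject coefficient of variation; the Dm-like measurements below are invented for illustration, not data from the reviewed studies.

```python
import statistics as st

def icc_oneway(data):
    """One-way random-effects ICC(1,1) from a balanced subjects x trials table:
    (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB/MSW are the between- and
    within-subject mean squares."""
    n, k = len(data), len(data[0])
    subject_means = [st.mean(row) for row in data]
    grand = st.mean(subject_means)
    msb = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, subject_means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Illustrative Dm-like measurements (mm): 5 subjects x 3 repeated trials.
data = [
    [8.0, 8.1, 7.9],
    [10.1, 10.0, 10.2],
    [12.0, 11.9, 12.1],
    [6.0, 6.1, 5.9],
    [9.0, 9.1, 8.9],
]

icc = icc_oneway(data)
# Absolute reliability: mean within-subject coefficient of variation (%).
mean_cv = st.mean(100 * st.stdev(row) / st.mean(row) for row in data)
print(f"ICC(1,1) = {icc:.3f}, mean CV = {mean_cv:.2f}%")
```

With small trial-to-trial noise relative to the between-subject spread, the ICC approaches 1 and the CV stays low, the pattern the review reports for Dm; a parameter like ½ Tr with large within-subject noise would show the opposite.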

  2. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can...

  3. Noncontact thermometry via laser pumped, thermographic phosphors: Characterization of systematic errors and industrial applications

    International Nuclear Information System (INIS)

    Gillies, G.T.; Dowell, L.J.; Lutz, W.N.; Allison, S.W.; Cates, M.R.; Noel, B.W.; Franks, L.A.; Borella, H.M.

    1987-10-01

    There are a growing number of industrial measurement situations that call for a high precision, noncontact method of thermometry. Our collaboration has been successful in developing one such method based on the laser-induced fluorescence of rare-earth-doped ceramic phosphors like Y2O3:Eu. In this paper, we summarize the results of characterization studies aimed at identifying the sources of systematic error in a laboratory-grade version of the method. We then go on to present data from measurements made in the afterburner plume of a jet turbine and inside an operating permanent magnet motor. 12 refs., 6 figs.

  4. Carers' Medication Administration Errors in the Domiciliary Setting: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Anam Parand

    Medications are mostly taken in patients' own homes, increasingly administered by carers, yet studies of medication safety have been largely conducted in the hospital setting. We aimed to review studies of how carers cause and/or prevent medication administration errors (MAEs) within the patient's home; to identify types, prevalence and causes of these MAEs and any interventions to prevent them. A narrative systematic review of literature published between 1 Jan 1946 and 23 Sep 2013 was carried out across the databases EMBASE, MEDLINE, PSYCHINFO, COCHRANE and CINAHL. Empirical studies were included where carers were responsible for preventing/causing MAEs in the home, and standardised tools were used for data extraction and quality assessment. Thirty-six papers met the criteria for narrative review, 33 of which included parents caring for children, two predominantly comprised adult children and spouses caring for older parents/partners, and one focused on paid carers mostly looking after older adults. The carer administration error rate ranged from 1.9 to 33% of medications administered and from 12 to 92.7% of carers administering medication. These included dosage errors, omitted administration, wrong medication and wrong time or route of administration. Contributory factors included individual carer factors (e.g. carer age), environmental factors (e.g. storage), medication factors (e.g. number of medicines), prescription communication factors (e.g. comprehensibility of instructions), psychosocial factors (e.g. carer-to-carer communication), and care-recipient factors (e.g. recipient age). The few interventions effective in preventing MAEs involved carer training and tailored equipment. This review shows that home medication administration errors made by carers are a potentially serious patient safety issue. Carers made similar errors to those made by professionals in other contexts and a wide variety of contributory factors were identified. The home care

  5. Design of a real-time spectroscopic rotating compensator ellipsometer without systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Broch, Laurent, E-mail: laurent.broch@univ-lorraine.fr [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Stein, Nicolas [Institut Jean Lamour, Universite de Lorraine, UMR 7198 CNRS, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Zimmer, Alexandre [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS, Universite de Bourgogne, 9 avenue Alain Savary BP 47870, F-21078 Dijon Cedex (France); Battie, Yann; Naciri, Aotmane En [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France)

    2014-11-28

    We describe a spectroscopic ellipsometer in the visible domain (400–800 nm) based on a rotating compensator technology using two detectors. The classical analyzer is replaced by a fixed Rochon birefringent beamsplitter which splits the incident light wave into two perpendicularly polarized waves, one oriented at +45° and the other at −45° with respect to the plane of incidence. Both emergent optical signals are analyzed by two identical CCD detectors which are synchronized by an optical encoder fixed on the shaft of the step-by-step motor of the compensator. The final spectrum is the result of the two averaged Ψ and Δ spectra acquired by both detectors. We show that Ψ and Δ spectra are acquired without systematic errors on a spectral range fixed from 400 to 800 nm. The acquisition time can be adjusted down to 25 ms. The setup was validated by monitoring the first steps of bismuth telluride film electrocrystallization. The results show that experimental growth parameters, such as film thickness and volumic fraction of deposited material, can be extracted with better trueness. - Highlights: • High-speed rotating compensator ellipsometer equipped with 2 detectors. • Ellipsometric angles without systematic errors. • In-situ monitoring of electrocrystallization of bismuth telluride thin layer. • High accuracy of fitted physical parameters.

  6. Systematic errors due to linear congruential random-number generators with the Swendsen-Wang algorithm: a warning.

    Science.gov (United States)

    Ossola, Giovanni; Sokal, Alan D

    2004-08-01

    We show that linear congruential pseudo-random-number generators can cause systematic errors in Monte Carlo simulations using the Swendsen-Wang algorithm, if the lattice size is a multiple of a very large power of 2 and one random number is used per bond. These systematic errors arise from correlations within a single bond-update half-sweep. The errors can be eliminated (or at least radically reduced) by updating the bonds in a random order or in an aperiodic manner. It also helps to use a generator of large modulus (e.g., 60 or more bits).
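
The correlation mechanism behind this warning is easy to demonstrate: for an LCG with power-of-2 modulus, the lowest b bits form their own LCG modulo 2^b, so they cycle with period at most 2^b. The sketch below uses the well-known Numerical-Recipes-style parameters as a stand-in; any power-of-2-modulus LCG shows the same behavior.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator with power-of-2 modulus (illustrative
    Numerical-Recipes-style parameters)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period_of_low_bits(bits, max_steps=10000):
    """Return the period of the lowest `bits` bits of the LCG stream."""
    gen = lcg(12345)
    mask = (1 << bits) - 1
    seq = [next(gen) & mask for _ in range(max_steps)]
    for p in range(1, max_steps // 2):
        if all(seq[i] == seq[i + p] for i in range(max_steps - p)):
            return p
    return None

for b in (1, 2, 4, 8):
    print(f"low {b} bit(s): period {period_of_low_bits(b)}")
# The lowest b bits cycle with period 2**b. If one random number is consumed
# per bond and the lattice size divides a large power of 2, successive sweeps
# revisit the same low-bit pattern in lockstep -- the within-half-sweep
# correlation the abstract identifies.
```

Updating bonds in a random or aperiodic order, as the abstract suggests, breaks the lockstep between the generator's short low-bit cycles and the lattice geometry.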

  7. Reliability analysis - systematic approach based on limited data

    International Nuclear Information System (INIS)

    Bourne, A.J.

    1975-11-01

    The initial approaches required for reliability analysis are outlined. These approaches highlight the system boundaries, examine the conditions under which the system is required to operate, and define the overall performance requirements. The discussion is illustrated by a simple example of an automatic protective system for a nuclear reactor. It is then shown how the initial approach leads to a method of defining the system, establishing performance parameters of interest and determining the general form of reliability models to be used. The overall system model and the availability of reliability data at the system level are next examined. An iterative process is then described whereby the reliability model and data requirements are systematically refined at progressively lower hierarchic levels of the system. At each stage, the approach is illustrated with examples from the protective system previously described. The main advantages of the approach put forward are the systematic process of analysis, the concentration of assessment effort in the critical areas and the maximum use of limited reliability data. (author)

  8. 5 CFR 1605.22 - Claims for correction of Board or TSP record keeper errors; time limitations.

    Science.gov (United States)

    2010-01-01

    ... record keeper errors; time limitations. 1605.22 Section 1605.22 Administrative Personnel FEDERAL... § 1605.22 Claims for correction of Board or TSP record keeper errors; time limitations. (a) Filing claims... after that time, the Board or TSP record keeper may use its sound discretion in deciding whether to...

  9. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    Science.gov (United States)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age is characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated [1]. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be accounted for in the retrieval algorithms to create a set of data which is closer to the TCCON measurements [1]. Using this process, the algorithms are being developed to reduce bias to within 0.1% worldwide of the true value. At this stage, the accuracy is within 1%, but through correcting small errors contained in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  10. Avoiding Systematic Errors in Isometric Squat-Related Studies without Pre-Familiarization by Using Sufficient Numbers of Trials

    Directory of Open Access Journals (Sweden)

    Pekünlü Ekim

    2014-10-01

    There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first “n” trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (The Best of n Trials Method [BnT]). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93-0.98) and coefficients of variation (3.4-7.0%). The Wilcoxon’s signed rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9-10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning effect-related systematic errors.
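
The learning-effect bias the study describes can be reproduced with a toy simulation: each unfamiliarized trial is depressed by a deficit that decays across the session, plus random noise, and the "best of n" estimator is taken over the first n trials. All numbers below are illustrative assumptions, not data or parameters from the study.

```python
import random
import statistics as st

random.seed(7)

def simulate_miss_trials(true_max=2000.0, n_trials=8, learn_gain=120.0, noise_sd=60.0):
    """Toy model of one participant's session without pre-familiarization:
    early trials are depressed by a learning deficit that halves each trial,
    plus Gaussian trial-to-trial noise (forces in N, all values assumed)."""
    return [true_max - learn_gain * 0.5 ** t + random.gauss(0.0, noise_sd)
            for t in range(n_trials)]

def best_of_n(trials, n):
    """The 'best of n trials' MISS estimator: maximum of the first n trials."""
    return max(trials[:n])

n_subjects = 500
b3, b8 = [], []
for _ in range(n_subjects):
    trials = simulate_miss_trials()
    b3.append(best_of_n(trials, 3))
    b8.append(best_of_n(trials, 8))

print(f"mean best-of-3: {st.mean(b3):.1f} N  mean best-of-8: {st.mean(b8):.1f} N")
# Best-of-3 systematically underestimates best-of-8, mirroring the study's
# finding that too few unfamiliarized trials bias MISS downward.
```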

  11. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S. [et al.]

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
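
The color dependence described above can be demonstrated with a toy synthetic-magnitude calculation: tilt the system throughput (as a changing atmosphere or non-uniform instrument would) and compare the zeropoint shift for a blue versus a red spectrum. The grid, spectra, and tilt below are invented for illustration and are not the DES bandpasses or pipeline.

```python
import math

def synthetic_mag(flux, throughput):
    """Synthetic broadband magnitude: -2.5 log10 of the throughput-weighted
    mean flux over the bandpass samples (zeropoint constant omitted)."""
    num = sum(f * t for f, t in zip(flux, throughput))
    den = sum(throughput)
    return -2.5 * math.log10(num / den)

# Toy spectra on a coarse wavelength grid (arbitrary flux units).
grid = [500 + 10 * i for i in range(11)]             # 500-600 nm "bandpass"
blue_star = [2.0 - 0.01 * (w - 500) for w in grid]   # falling (blue) spectrum
red_star = [1.0 + 0.01 * (w - 500) for w in grid]    # rising (red) spectrum

flat = [1.0] * len(grid)                             # nominal throughput
tilted = [1.0 + 0.002 * (w - 550) for w in grid]     # chromatic tilt (e.g. aerosol)

for name, star in (("blue", blue_star), ("red", red_star)):
    dm = synthetic_mag(star, tilted) - synthetic_mag(star, flat)
    print(f"{name} star: zeropoint shift {1000 * dm:+.1f} mmag")
# The shift has opposite sign for blue and red stars: a color-dependent
# systematic that a single gray per-frame zeropoint offset cannot absorb.
```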

  12. Experiences of and support for nurses as second victims of adverse nursing errors: a qualitative systematic review.

    Science.gov (United States)

    Cabilan, C J; Kynoch, Kathryn

    2017-09-01

    Second victims are clinicians who have made adverse errors and feel traumatized by the experience. The current published literature on second victims is mainly representative of doctors, hence nurses' experiences are not fully depicted. This systematic review was necessary to understand the second victim experience for nurses, explore the support provided, and recommend appropriate support systems for nurses. To synthesize the best available evidence on nurses' experiences as second victims, and explore their experiences of the support they receive and the support they need. Participants were registered nurses who made adverse errors. The review included studies that described nurses' experiences as second victims and/or the support they received after making adverse errors. All studies conducted in any health care settings worldwide. The qualitative studies included were grounded theory, discourse analysis and phenomenology. A structured search strategy was used to locate all unpublished and published qualitative studies, but was limited to the English language, and published between 1980 and February 2017. The references of studies selected for eligibility screening were hand-searched for additional literature. Eligible studies were assessed by two independent reviewers for methodological quality using a standardized critical appraisal instrument from the Joanna Briggs Institute Qualitative Assessment and Review Instrument (JBI QARI). Themes and narrative statements were extracted from papers included in the review using the standardized data extraction tool from JBI QARI. Data synthesis was conducted using the Joanna Briggs Institute meta-aggregation approach. There were nine qualitative studies included in the review. The narratives of 284 nurses generated a total of 43 findings, which formed 15 categories based on similarity of meaning. Four synthesized findings were generated from the categories: (i) The error brings a considerable emotional burden to the

  13. Estimation of the limit of detection with a bootstrap-derived standard error by a partly non-parametric approach. Application to HPLC drug assays

    DEFF Research Database (Denmark)

    Linnet, Kristian

    2005-01-01

    Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors.
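
The title's approach can be sketched generically: a partly non-parametric limit of detection (LoB as a non-parametric percentile of blank responses, plus a parametric multiple of the low-sample SD), with its standard error obtained by bootstrap resampling. The estimator form, the 1.645 factor, and the synthetic responses are generic illustrations of this class of method, not the paper's exact procedure or data.

```python
import random
import statistics as st

random.seed(42)

def percentile(sorted_vals, q):
    """Simple non-parametric (nearest-rank) percentile of a sorted list."""
    idx = min(len(sorted_vals) - 1, max(0, int(round(q * (len(sorted_vals) - 1)))))
    return sorted_vals[idx]

def limit_of_detection(blanks, low_samples):
    """Partly non-parametric LoD sketch: LoB as the 95th percentile of blank
    responses (non-parametric) plus 1.645 * SD of a low-concentration sample
    (parametric), controlling type I and II errors at ~5% under normality."""
    lob = percentile(sorted(blanks), 0.95)
    return lob + 1.645 * st.stdev(low_samples)

# Synthetic HPLC-like responses (arbitrary units) -- assumed data.
blanks = [random.gauss(0.0, 1.0) for _ in range(60)]
lows = [random.gauss(3.0, 1.2) for _ in range(60)]

lod_hat = limit_of_detection(blanks, lows)

# Bootstrap: resample both data sets with replacement, recompute the LoD,
# and take the SD of the replicates as its standard error.
B = 2000
reps = []
for _ in range(B):
    b = [random.choice(blanks) for _ in range(len(blanks))]
    l = [random.choice(lows) for _ in range(len(lows))]
    reps.append(limit_of_detection(b, l))
se = st.stdev(reps)
print(f"LoD = {lod_hat:.2f}, bootstrap SE = {se:.2f}")
```

The bootstrap sidesteps the awkward sampling distribution of a percentile-plus-SD statistic, which has no convenient closed-form standard error.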

  14. Barriers to reporting medication errors and near misses among nurses: A systematic review.

    Science.gov (United States)

    Vrbnjak, Dominika; Denieffe, Suzanne; O'Gorman, Claire; Pajnkihar, Majda

    2016-11-01

    To explore barriers to nurses' reporting of medication errors and near misses in hospital settings. Systematic review. Medline, CINAHL, PubMed and Cochrane Library in addition to Google and Google Scholar and reference lists of relevant studies published in English between January 1981 and April 2015 were searched for relevant qualitative, quantitative or mixed methods empirical studies or unpublished PhD theses. Papers with a primary focus on barriers to reporting medication errors and near misses in nursing were included. The titles and abstracts of the search results were assessed for eligibility and relevance by one of the authors. After retrieval of the full texts, two of the authors independently made decisions concerning the final inclusion and these were validated by the third reviewer. Three authors independently assessed methodological quality of studies. Relevant data were extracted and findings were synthesised using thematic synthesis. From 4038 identified records, 38 studies were included in the synthesis. Findings suggest that organizational barriers such as culture, the reporting system and management behaviour in addition to personal and professional barriers such as fear, accountability and characteristics of nurses are barriers to reporting medication errors. To overcome reported barriers it is necessary to develop a non-blaming, non-punitive and non-fearful learning culture at unit and organizational level. Anonymous, effective, uncomplicated and efficient reporting systems and supportive management behaviour that provides open feedback to nurses is needed. Nurses are accountable for patients' safety, so they need to be educated and skilled in error management. Lack of research into barriers to reporting of near misses and low awareness of reporting suggest the need for further research and development of educational and management approaches to overcome these barriers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States); Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada); Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States); Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States); Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2013-11-15

Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were
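The 3%/3 mm and 2%/2 mm criteria referred to above combine a dose-difference tolerance with a distance-to-agreement (DTA) search. As a rough illustration of the metric (not the authors' implementation, and simplified to one dimension with a global normalization), a gamma index can be sketched as:

```python
import numpy as np

def gamma_1d(ref, ref_x, eval_dose, eval_x, dose_tol=0.03, dta_tol=0.3, local=False):
    """Minimal 1-D gamma index (global 3%/3 mm by default; dose in arbitrary
    units, positions in cm). For each reference point, search all evaluated
    points for the minimum combined dose-difference / DTA distance."""
    norm = ref.max()  # global normalization; local mode normalizes per point
    gammas = np.empty_like(ref, dtype=float)
    for i, (r, x) in enumerate(zip(ref, ref_x)):
        dd = (eval_dose - r) / ((r if local else norm) * dose_tol)
        dx = (eval_x - x) / dta_tol
        gammas[i] = np.sqrt(dd**2 + dx**2).min()
    return gammas

# Sanity check: an evaluated distribution identical to the reference
# passes everywhere with gamma = 0.
x = np.linspace(0.0, 10.0, 101)
d_ref = np.exp(-((x - 5.0) ** 2) / 4.0)
gam = gamma_1d(d_ref, x, d_ref, x)
pass_rate = (gam <= 1.0).mean() * 100.0
```

Note the study's point: a high passing rate under a forgiving global criterion can coexist with clinically relevant local errors, which is why the tighter 2% local/2 mm setting is more revealing.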

  16. Evaluation of error bands and confidence limits for thermal measurements in the CFTL bundle

    International Nuclear Information System (INIS)

    Childs, K.W.; Sanders, J.P.; Conklin, J.C.

    1979-01-01

Surface cladding temperatures for the fuel rod simulators in the Core Flow Test Loop (CFTL) must be inferred from a measurement at a thermocouple junction within the rod. This step requires the evaluation of the thermal field within the rod based on known parameters such as heat generation rate, dimensional tolerances, thermal properties, and contact coefficients. Uncertainties in the surface temperature can be evaluated by assigning error bands to each of the parameters used in the calculation. A statistical method has been employed to establish the confidence limits for the surface temperature from a combination of the standard deviations of the important parameters. This method indicates that for a CFTL fuel rod simulator with a total power of 38 kW and a ratio of maximum to average axial power of 1.21, the 95% confidence limit for the calculated surface temperature is ±45 °C at the midpoint of the rod.
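The statistical combination described above is commonly realized as a root-sum-square of independent parameter uncertainties weighted by sensitivity coefficients. The sketch below illustrates the idea with invented sensitivity values and sigmas; the actual CFTL coefficients are not given in the abstract.

```python
import math

def combined_uncertainty(sensitivities_and_sigmas):
    """Root-sum-square combination of independent parameter uncertainties:
    sigma_T^2 = sum_i (dT/dp_i)^2 * sigma_i^2."""
    return math.sqrt(sum((s * sig) ** 2 for s, sig in sensitivities_and_sigmas))

# Hypothetical sensitivity coefficients (deg C per unit of parameter) paired
# with parameter standard deviations -- illustration only.
params = [
    (1.2, 10.0),  # heat generation rate
    (0.8, 12.0),  # contact coefficient
    (0.5, 8.0),   # thermal conductivity
]
sigma_T = combined_uncertainty(params)
cl95 = 1.96 * sigma_T  # two-sided 95% confidence limit, normal statistics
```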

  17. Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods.

    Science.gov (United States)

    Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G

    2014-01-27

Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase information, which is lost when the diffraction patterns are detected, by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image is limited by the angular extent over which the diffraction patterns are recorded and by how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, set by the signal-to-noise ratio of the diffraction patterns and the amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.

  18. Managing Systematic Errors in a Polarimeter for the Storage Ring EDM Experiment

    Science.gov (United States)

    Stephenson, Edward J.; Storage Ring EDM Collaboration

    2011-05-01

The EDDA plastic scintillator detector system at the Cooler Synchrotron (COSY) has been used to demonstrate that it is possible, using a thick target at the edge of the circulating beam, to meet the requirements for a polarimeter to be used in the search for an electric dipole moment on the proton or deuteron. Emphasizing elastic and low Q-value reactions leads to large analyzing powers and, along with thick targets, to efficiencies near 1%. Using only information obtained by comparing count rates for oppositely vector-polarized beam states, together with a calibration of the sensitivity of the polarimeter to rate and geometric changes, the contribution of systematic errors can be suppressed below the level of one part per million.
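Comparing count rates for opposite polarization states is commonly implemented as a cross-ratio asymmetry, in which detector efficiencies and the integrated luminosities of the two spin states cancel exactly at first order. The sketch below is a generic illustration of that cancellation, not EDDA analysis code; the counts, efficiencies and luminosities are invented.

```python
import math

def cross_ratio_asymmetry(l_up, r_up, l_down, r_down):
    """Cross-ratio asymmetry from left/right counts in the two spin states.
    Left/right efficiencies and the per-state luminosities cancel in the
    geometric means, suppressing rate and geometry systematics."""
    num = math.sqrt(l_up * r_down)
    den = math.sqrt(l_down * r_up)
    return (num - den) / (num + den)

# Toy counts: true asymmetry eps = 0.1, with mismatched detector efficiencies
# (e_l, e_r) and different luminosities per spin state (n_up, n_down).
eps, e_l, e_r, n_up, n_down = 0.1, 0.9, 1.1, 1.0e6, 8.0e5
l_up   = e_l * n_up   * (1 + eps)
r_up   = e_r * n_up   * (1 - eps)
l_down = e_l * n_down * (1 - eps)
r_down = e_r * n_down * (1 + eps)
measured = cross_ratio_asymmetry(l_up, r_up, l_down, r_down)

# A naive single-detector asymmetry is polluted by the luminosity change.
naive = (l_up - l_down) / (l_up + l_down)
```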

  19. Simulating systematic errors in X-ray absorption spectroscopy experiments: Sample and beam effects

    Energy Technology Data Exchange (ETDEWEB)

Curis, Emmanuel [Laboratoire de Biomathématiques, Faculté de Pharmacie, Université René Descartes (Paris V), 4 Avenue de l'Observatoire, 75006 Paris (France)]. E-mail: emmanuel.curis@univ-paris5.fr; Osan, Janos [KFKI Atomic Energy Research Institute (AEKI), P.O. Box 49, H-1525 Budapest (Hungary); Falkenberg, Gerald [Hamburger Synchrotronstrahlungslabor (HASYLAB), Deutsches Elektronen-Synchrotron (DESY), Notkestrasse 85, 22607 Hamburg (Germany); Benazeth, Simone [Laboratoire de Biomathématiques, Faculté de Pharmacie, Université René Descartes (Paris V), 4 Avenue de l'Observatoire, 75006 Paris (France); Laboratoire d'Utilisation du Rayonnement Electromagnétique (LURE), Bâtiment 209D, Campus d'Orsay, 91406 Orsay (France); Toeroek, Szabina [KFKI Atomic Energy Research Institute (AEKI), P.O. Box 49, H-1525 Budapest (Hungary)

    2005-07-15

    The article presents an analytical model to simulate experimental imperfections in the realization of an X-ray absorption spectroscopy experiment, performed in transmission or fluorescence mode. Distinction is made between sources of systematic errors on a time-scale basis, to select the more appropriate model for their handling. For short time-scale, statistical models are the most suited. For large time-scale, the model is developed for sample and beam imperfections: mainly sample inhomogeneity, sample self-absorption, beam achromaticity. The ability of this model to reproduce the effects of these imperfections is exemplified, and the model is validated on real samples. Various potential application fields of the model are then presented.

  20. Simulating systematic errors in X-ray absorption spectroscopy experiments: Sample and beam effects

    International Nuclear Information System (INIS)

    Curis, Emmanuel; Osan, Janos; Falkenberg, Gerald; Benazeth, Simone; Toeroek, Szabina

    2005-01-01

The article presents an analytical model to simulate experimental imperfections in the realization of an X-ray absorption spectroscopy experiment, performed in transmission or fluorescence mode. Distinction is made between sources of systematic errors on a time-scale basis, to select the more appropriate model for their handling. For short time-scale, statistical models are the most suited. For large time-scale, the model is developed for sample and beam imperfections: mainly sample inhomogeneity, sample self-absorption, beam achromaticity. The ability of this model to reproduce the effects of these imperfections is exemplified, and the model is validated on real samples. Various potential application fields of the model are then presented.

  1. Analysis of operator splitting errors for near-limit flame simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Zhen; Zhou, Hua [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); Li, Shan [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); School of Aerospace Engineering, Tsinghua University, Beijing 100084 (China); Ren, Zhuyin, E-mail: zhuyinren@tsinghua.edu.cn [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); School of Aerospace Engineering, Tsinghua University, Beijing 100084 (China); Lu, Tianfeng [Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269-3139 (United States); Law, Chung K. [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544 (United States)

    2017-04-15

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction–diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory
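The splitting error discussed above can be seen even in a scalar toy problem with a linear "reaction" operator and a constant "mixing" source, a far simpler stand-in for the paper's PSR model with Arrhenius kinetics. The sketch below verifies the second-order accuracy of Strang splitting by halving the step size and checking that the error drops by roughly a factor of four:

```python
import math

def strang_step(y, dt, a, c):
    """One Strang splitting step for dy/dt = a*y + c: half step of the
    source term (B), full exact step of the linear 'reaction' (A), half
    step of B. The operators do not commute, so a splitting error remains."""
    y = y + 0.5 * dt * c          # B half step
    y = y * math.exp(a * dt)      # A full step (exact exponential)
    y = y + 0.5 * dt * c          # B half step
    return y

def integrate(y0, t_end, n, a, c):
    y, dt = y0, t_end / n
    for _ in range(n):
        y = strang_step(y, dt, a, c)
    return y

a, c, y0, t_end = -2.0, 1.0, 0.0, 1.0
exact = (y0 + c / a) * math.exp(a * t_end) - c / a   # analytic solution
err_coarse = abs(integrate(y0, t_end, 10, a, c) - exact)
err_fine = abs(integrate(y0, t_end, 20, a, c) - exact)
ratio = err_coarse / err_fine   # ~4 confirms second-order accuracy
```

For stiff near-limit chemistry the paper's point is sharper: the splitting error is not merely second order but can tip the system across an ignition or extinction boundary, which a coupled (e.g. semi-implicit midpoint) solve avoids.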

  2. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    Energy Technology Data Exchange (ETDEWEB)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.; Bard, D.; Bertin, E.; Bosch, J.; Boutigny, D.; Courbin, F.; Dawson, W. A.; Donnarumma, A.; Fenech Conti, I.; Gavazzi, R.; Gentile, M.; Gill, M. S. S.; Hogg, D. W.; Huff, E. M.; Jee, M. J.; Kacprzak, T.; Kilbinger, M.; Kuntzer, T.; Lang, D.; Luo, W.; March, M. C.; Marshall, P. J.; Meyers, J. E.; Miller, L.; Miyatake, H.; Nakajima, R.; Ngole Mboula, F. M.; Nurbaeva, G.; Okura, Y.; Paulin-Henriksson, S.; Rhodes, J.; Schneider, M. D.; Shan, H.; Sheldon, E. S.; Simet, M.; Starck, J. -L.; Sureau, F.; Tewes, M.; Zarb Adami, K.; Zhang, J.; Zuntz, J.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  3. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    International Nuclear Information System (INIS)

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-01-01

Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
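Correcting volumetric gas readings for ambient temperature and pressure, as the study recommends, amounts to an ideal-gas normalization to standard conditions. The sketch below is a minimal illustration; the exact normalization conventions and reference conditions used in the study are not stated in the abstract.

```python
def normalize_gas_volume(v_ml, t_celsius, p_ambient_kpa, p_h2o_kpa=None):
    """Convert a measured (wet) gas volume to dry volume at standard
    conditions (0 deg C, 101.325 kPa) via the ideal gas law. The water
    vapour partial pressure, if supplied, is removed first."""
    T0, P0 = 273.15, 101.325
    p_dry = p_ambient_kpa - (p_h2o_kpa if p_h2o_kpa is not None else 0.0)
    return v_ml * (p_dry / P0) * (T0 / (t_celsius + 273.15))

# The same 100 ml reading at 25 C corresponds to ~16% less gas at 85 kPa
# (high altitude) than at sea-level pressure -- the effect the study flags.
v_sea = normalize_gas_volume(100.0, 25.0, 101.325)
v_alt = normalize_gas_volume(100.0, 25.0, 85.0)
```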

  4. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    Energy Technology Data Exchange (ETDEWEB)

    Strömberg, Sten, E-mail: sten.stromberg@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Nistor, Mihaela, E-mail: mn@bioprocesscontrol.com [Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden); Liu, Jing, E-mail: jing.liu@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden)

    2014-11-15

Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.

  5. Systematic instrumental errors between oxygen saturation analysers in fetal blood during deep hypoxemia.

    Science.gov (United States)

    Porath, M; Sinha, P; Dudenhausen, J W; Luttkus, A K

    2001-05-01

During a study of artificially produced deep hypoxemia in fetal cord blood, systematic errors of three different oxygen saturation analysers were evaluated against a reference CO oximeter. The oxygen tensions (PO2) of 83 pre-heparinized fetal blood samples from umbilical veins were reduced by tonometry to 1.3 kPa (10 mm Hg) and 2.7 kPa (20 mm Hg). The oxygen saturation (SO2) was determined (n=1328) on a reference CO oximeter (ABL625, Radiometer Copenhagen) and on three tested instruments (two CO oximeters: Chiron865, Bayer Diagnostics; ABL700, Radiometer Copenhagen, and a portable blood gas analyser, i-STAT, Abbott). The CO oximeters measure the oxyhemoglobin and the reduced hemoglobin fractions by absorption spectrophotometry. The i-STAT system calculates the oxygen saturation from the measured pH, PO2, and PCO2. The measurements were performed in duplicate. Statistical evaluation focused on the differences between duplicate measurements and on systematic instrumental errors in oxygen saturation analysis compared to the reference CO oximeter. After tonometry, the median saturation dropped to 32.9% at a PO2=2.7 kPa (20 mm Hg), defined as saturation range 1, and to 10% SO2 at a PO2=1.3 kPa (10 mm Hg), defined as range 2. With decreasing SO2, all devices showed an increased difference between duplicate measurements. ABL625 and ABL700 showed the closest agreement between instruments (0.25% SO2 bias at saturation range 1 and -0.33% SO2 bias at saturation range 2). Chiron865 indicated higher saturation values than ABL625 (3.07% SO2 bias at saturation range 1 and 2.28% SO2 bias at saturation range 2). Calculated saturation values (i-STAT) were more than 30% lower than the measured values of ABL625. The disagreement among CO oximeters was small but increased under deep hypoxemia. The saturation values calculated by the i-STAT system were unacceptably low.
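Calculating SO2 from blood-gas values, as the i-STAT does, can be approximated for PO2 alone by the classic Severinghaus fit to the standard oxyhemoglobin dissociation curve. The sketch below is illustrative only: it omits the pH/PCO2 (Bohr effect) corrections a real analyser applies, and such simplifications are one reason calculated and measured saturations can diverge at very low PO2.

```python
def severinghaus_so2(po2_mmhg):
    """Estimate hemoglobin O2 saturation (%) from PO2 alone using the
    Severinghaus (1979) empirical fit to the standard adult dissociation
    curve: S = 100 / (23400 / (p^3 + 150 p) + 1), p in mm Hg."""
    p = po2_mmhg
    return 100.0 / (23400.0 / (p ** 3 + 150.0 * p) + 1.0)

# The standard-curve estimates land close to the measured medians quoted
# above: ~32% at 20 mm Hg (range 1) and ~10% at 10 mm Hg (range 2).
s20 = severinghaus_so2(20.0)
s10 = severinghaus_so2(10.0)
```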

  6. Avoiding a Systematic Error in Assessing Fat Graft Survival in the Breast with Repeated Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Glovinski, Peter Viktor; Herly, Mikkel; Müller, Felix C

    2016-01-01

    Several techniques for measuring breast volume (BV) are based on examining the breast on magnetic resonance imaging. However, when techniques designed to measure total BV are used to quantify BV changes, for example, after fat grafting, a systematic error is introduced because BV changes lead to ...

  7. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    International Nuclear Information System (INIS)

    Gordon, J J; Siebers, J V

    2007-01-01

The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion σ[1 - γN/25] ∼< 0.2, and became inaccurate outside this range because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ∼> σP, where σP = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σP takes values other than 0.32 cm.) When σ ∼< σP, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ∼> σP, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of treatments. The proposed alternative margin algorithm provides better margin
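The VHMF referred to above, in its usual form, prescribes M = 2.5Σ + 1.64(√(σ² + σP²) − σP), which reduces to the familiar 2.5Σ + 0.7σ when σ is large compared with the penumbra width σP. A minimal sketch with illustrative numbers (not the paper's alternative algorithm):

```python
import math

def vhmf_margin(Sigma, sigma, sigma_p=0.32):
    """van Herk margin recipe (all lengths in cm):
    M = 2.5*Sigma + 1.64*(sqrt(sigma^2 + sigma_p^2) - sigma_p),
    aiming for a minimum CTV dose of 95% in 90% of patients in the
    infinite-fraction limit. sigma_p is the dose penumbra sigma."""
    return 2.5 * Sigma + 1.64 * (math.sqrt(sigma ** 2 + sigma_p ** 2) - sigma_p)

# Illustrative setup uncertainties of 3 mm systematic and 3 mm random;
# the full formula stays close to the 2.5*Sigma + 0.7*sigma shorthand here.
m = vhmf_margin(Sigma=0.3, sigma=0.3)
approx = 2.5 * 0.3 + 0.7 * 0.3
```

The abstract's caveat applies: this recipe presumes many fractions and non-negligible σ relative to σP, and can underestimate margins for large σ, small Σ and small N.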

  8. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    Science.gov (United States)

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
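A pedestal fit of the kind described above can be sketched with a synthetic profile and one common mtanh parameterisation (conventions vary between machines). This toy recovers only the pedestal width by a grid search with the other parameters held fixed, rather than the full multi-parameter deconvolved fit used on JET:

```python
import numpy as np

def mtanh_profile(r, height, base, r_mid, width, slope):
    """Pedestal profile built from a modified tanh: a step of the given
    height and width centred at r_mid, with a linear core slope term.
    One common parameterisation; exact conventions vary."""
    x = (r_mid - r) / (width / 2.0)
    mtanh = ((1.0 + slope * x) * np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))
    return (height - base) / 2.0 * (mtanh + 1.0) + base

# Synthetic, noise-free pre-ELM edge profile with known width 0.06 m.
r = np.linspace(3.6, 3.9, 60)
data = mtanh_profile(r, 1.0, 0.05, 3.8, 0.06, 0.02)

# Recover the width by least squares over a grid, other parameters held at
# their known values (a real fit frees all parameters and folds in the
# radial instrument function).
widths = np.linspace(0.02, 0.12, 101)
sse = [float(np.sum((mtanh_profile(r, 1.0, 0.05, 3.8, w, 0.02) - data) ** 2))
       for w in widths]
fit_width = float(widths[int(np.argmin(sse))])
```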

  9. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  10. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
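The permutation-test idea, stripped of the GAMLSS boosting machinery, can be sketched for a simple two-device comparison: systematic bias shows up as a location (mean) difference and random error as a scale (spread) difference. The data below are invented; this is an illustration, not the authors' procedure:

```python
import random
import statistics as stats

def perm_test(x, y, stat, n_perm=2000, seed=1):
    """Two-sample permutation test: p-value for the observed absolute
    difference in `stat` under random reassignment of measurements to the
    two devices (add-one correction keeps p strictly positive)."""
    rng = random.Random(seed)
    pooled = x + y
    observed = abs(stat(x) - stat(y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(stat(xs) - stat(ys)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

rng = random.Random(0)
device_a = [rng.gauss(50.0, 1.0) for _ in range(40)]
device_b = [rng.gauss(53.0, 3.0) for _ in range(40)]  # bias and extra spread
p_bias = perm_test(device_a, device_b, stats.mean)    # location: systematic bias
p_spread = perm_test(device_a, device_b, stats.stdev) # scale: random error
```

Testing location and scale separately mirrors the paper's point that a device comparison should not stop at mean agreement.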

  11. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
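The faster-than-Pauli failure described above can be illustrated with a single-qubit toy, far simpler than the paper's repetition-code analysis: n identical small rotations add in amplitude when applied coherently, so the error probability grows quadratically in n, while the Pauli-twirled (stochastic) version of the same rotation accumulates error probability only linearly:

```python
import math

def coherent_error_prob(eps, n):
    """Probability of flipping |0> after n identical small rotations applied
    coherently: the rotation angle grows linearly in n, the probability
    quadratically (until it saturates)."""
    return math.sin(n * eps) ** 2

def pauli_error_prob(eps, n):
    """Same rotation treated as a stochastic flip with probability
    p = sin^2(eps) per step; p_n = 0.5*(1 - (1 - 2p)^n) is the exact
    result for n independent flips."""
    p = math.sin(eps) ** 2
    return 0.5 * (1.0 - (1.0 - 2.0 * p) ** n)

eps, n = 0.01, 50
p_coh = coherent_error_prob(eps, n)   # ~ (n*eps)^2
p_pauli = pauli_error_prob(eps, n)    # ~ n*eps^2, much smaller
```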

  12. Measuring nuclear-spin-dependent parity violation with molecules: Experimental methods and analysis of systematic errors

    Science.gov (United States)

    Altuntaş, Emine; Ammon, Jeffrey; Cahn, Sidney B.; DeMille, David

    2018-04-01

Nuclear-spin-dependent parity violation (NSD-PV) effects in atoms and molecules arise from Z0 boson exchange between electrons and the nucleus and from the magnetic interaction between electrons and the parity-violating nuclear anapole moment. It has been proposed to study NSD-PV effects using an enhancement of the observable effect in diatomic molecules [D. DeMille et al., Phys. Rev. Lett. 100, 023003 (2008), 10.1103/PhysRevLett.100.023003]. Here we demonstrate highly sensitive measurements of this type, using the test system 138Ba19F. We show that systematic errors associated with our technique can be suppressed to at least the level of the present statistical sensitivity. With ~170 h of data, we measure the matrix element W of the NSD-PV interaction with uncertainty δW/(2π) < 0.7 Hz for each of two configurations where W must have different signs. This sensitivity would be sufficient to measure NSD-PV effects of the size anticipated across a wide range of nuclei.

  13. Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    Science.gov (United States)

    Imig, Astrid; Stephenson, Edward

    2009-10-01

The Storage Ring EDM Collaboration was using the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17 mm thick carbon placed close to the beam so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.

  14. Joint position sense error in people with neck pain: A systematic review.

    Science.gov (United States)

    de Vries, J; Ischebeck, B K; Voogt, L P; van der Geest, J N; Janssen, M; Frens, M A; Kleinrensink, G J

    2015-12-01

    Several studies in recent decades have examined the relationship between proprioceptive deficits and neck pain. However, there is no uniform conclusion on the relationship between the two. Clinically, proprioception is evaluated using the Joint Position Sense Error (JPSE), which reflects a person's ability to accurately return his head to a predefined target after a cervical movement. We focused on differences in JPSE between people with neck pain and healthy controls. Systematic review according to the PRISMA guidelines. Our data sources were Embase, Medline OvidSP, Web of Science, Cochrane Central, CINAHL and Pubmed Publisher. To be included, studies had to compare JPSE of the neck (O) in people with neck pain (P) with JPSE of the neck in healthy controls (C). Fourteen studies were included. Four studies reported that participants with traumatic neck pain had a significantly higher JPSE than healthy controls. Of the eight studies involving people with non-traumatic neck pain, four reported significant differences between the groups. The JPSE did not vary between neck-pain groups. Current literature shows the JPSE to be a relevant measure when it is used correctly. All studies that calculated the JPSE over at least six trials showed a significantly increased JPSE in the neck pain group. This strongly suggests that 'number of repetitions' is a major element in correctly performing the JPSE test.
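
As a concrete illustration of the measure discussed above, the JPSE can be computed as the mean absolute repositioning error over repeated trials. The absolute-error definition and the six-trial minimum below reflect this review's finding; the exact formula varies between the individual studies:

```python
import statistics

def jpse(target_deg, returns_deg, min_trials=6):
    """Joint Position Sense Error: mean absolute angular difference between
    the target head position and the positions actually reproduced over
    repeated trials.  The review suggests averaging over at least six
    trials for a reliable value."""
    if len(returns_deg) < min_trials:
        raise ValueError(f"JPSE should be averaged over at least {min_trials} trials")
    return statistics.mean(abs(r - target_deg) for r in returns_deg)

# Example: neutral head position (0 deg) reproduced after six rotations.
print(jpse(0.0, [3.1, -2.4, 4.0, -1.5, 2.2, -3.3]))  # mean |error| in degrees
```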

  15. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
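
The correction pipeline described above (fit the reference sample's artificial displacement field with a parametric polynomial, then subtract the fitted model from the test-sample field) can be sketched as follows. The first-order polynomial, the synthetic drift values and all variable names are illustrative assumptions; the record does not specify the polynomial order used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference-sample data: measurement points and the artificial
# displacement (x-component) that DVC reports there, i.e. a translation plus
# a dilatation caused by the CT scanner's thermal drift.
ref_pts = rng.uniform(-1.0, 1.0, size=(200, 3))
true_translation, true_dilatation = 0.05, 0.002
u_ref = true_translation + true_dilatation * ref_pts[:, 0]

# First-order parametric polynomial model u = a0 + a1*x + a2*y + a3*z,
# fitted to the reference-sample field by least squares.
A = np.column_stack([np.ones(len(ref_pts)), ref_pts])
coeffs, *_ = np.linalg.lstsq(A, u_ref, rcond=None)

# Correct the test-sample field by subtracting the modelled artefact;
# here the test sample carries a real deformation of 0.01 on top of it.
test_pts = rng.uniform(-1.0, 1.0, size=(50, 3))
u_test_measured = 0.01 + true_translation + true_dilatation * test_pts[:, 0]
artefact = np.column_stack([np.ones(len(test_pts)), test_pts]) @ coeffs
u_test_corrected = u_test_measured - artefact
print(np.allclose(u_test_corrected, 0.01))  # only the real deformation remains
```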

  17. Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review

    Science.gov (United States)

    2013-01-01

    Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients. PMID:23565739

  18. ASSESSMENT OF SYSTEMATIC CHROMATIC ERRORS THAT IMPACT SUB-1% PHOTOMETRIC PRECISION IN LARGE-AREA SKY SURVEYS

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Boada, S.; Mondrik, N.; Nagasawa, D. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, and Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Tucker, D.; Annis, J.; Finley, D. A.; Kent, S.; Lin, H.; Marriner, J.; Wester, W. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Kessler, R.; Scolnic, D. [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Burke, D. L.; Rykoff, E. S. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); James, D. J.; Walker, A. R. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Collaboration: DES Collaboration; and others

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for

  19. Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism

    Science.gov (United States)

    Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.

    2018-05-01

    The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting through both constant and energy-dependent cross sections, although in the latter case only for sufficiently dilute systems. We find that the optimal fit technique consists in simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
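
The fitting recipe in this record can be illustrated with a minimal sketch: fix the intercept C(0), fit only the relaxation time tau of a decaying exponential on a restricted interval, and evaluate the Green-Kubo integral analytically as eta = V*C(0)*tau/(k_B*T). The synthetic data, natural units and one-parameter least-squares estimator are assumptions, not the paper's implementation:

```python
import numpy as np

kB = 1.0  # natural units (assumed)

def shear_viscosity_gk(t, corr, volume, temperature, fit_window):
    """Green-Kubo shear viscosity eta = V/(k_B*T) * integral of the
    shear-stress autocorrelation C(t), with C(t) modelled as a decaying
    exponential C(0)*exp(-t/tau).  Following the record, the intercept
    C(0) is fixed and only tau is fitted, on a restricted interval."""
    c0 = corr[0]
    sel = (t > 0) & (t <= fit_window)
    # ln(C/C0) = -t/tau  ->  one-parameter least-squares estimate of tau
    tau = -np.sum(t[sel] ** 2) / np.sum(t[sel] * np.log(corr[sel] / c0))
    # integral of C(0)*exp(-t/tau) from 0 to infinity is C(0)*tau
    return volume / (kB * temperature) * c0 * tau

# Synthetic check: exact exponential with C(0) = 2.0, tau = 0.5, V = T = 1,
# so eta should come out as 2.0 * 0.5 = 1.0.
t = np.linspace(0.0, 5.0, 501)
corr = 2.0 * np.exp(-t / 0.5)
print(shear_viscosity_gk(t, corr, 1.0, 1.0, fit_window=2.0))
```

In practice the fit window would be chosen from the relative error on the measured correlation function, as the record recommends; here it is simply fixed.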

  20. Increased errors and decreased performance at night: A systematic review of the evidence concerning shift work and quality.

    Science.gov (United States)

    de Cordova, Pamela B; Bradford, Michelle A; Stone, Patricia W

    2016-02-15

    Shift workers have worse health outcomes than employees who work standard business hours. However, it is unclear how this poorer health may be related to employee work productivity. The purpose of this systematic review is to assess the relationship between shift work and errors and performance. Searches of MEDLINE/PubMed, EBSCOhost, and CINAHL were conducted to identify articles that examined the relationship between shift work, errors, quality, productivity, and performance. All articles were assessed for study quality. A total of 435 abstracts were screened, with 13 meeting inclusion criteria. Eight studies were rated to be of strong methodological quality. Nine studies demonstrated a positive relationship: night shift workers committed more errors and had decreased performance. Night shift workers have worse health, which may contribute to errors and decreased performance in the workplace.

  1. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    Science.gov (United States)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10^-8 m/s^2 ≈ 10^-9 g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on Raman duration and frequency step size, present the vector light shift and tensor light shift induced magnetic field measurement offset, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atomic interferometer gravimeters.
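
Given a mapped field profile B(z), the resulting gravity bias can be sketched numerically. This is not the paper's evaluation: the quadratic Zeeman coefficient below is the textbook value for the 87Rb clock transition, and the Raman wavevector, pulse separation and field profiles are assumed illustrative numbers:

```python
import numpy as np

# Assumed experiment parameters (illustrative, not the GAIN values):
K_B2 = 575.15   # Hz/G^2, quadratic Zeeman coefficient of the 87Rb clock transition
k_eff = 1.61e7  # 1/m, effective Raman wavevector
T = 0.26        # s, interferometer pulse separation
g = 9.81        # m/s^2

def zeeman_gravity_bias(B_of_z, z0=0.0, v0=0.0, n=4000):
    """Gravity bias from the quadratic Zeeman shift in a pi/2 - pi - pi/2
    Raman interferometer.  The phase responds to the transition-frequency
    shift delta_nu(t) = K_B2 * B(z(t))^2 with weight -1 between the first
    two pulses and +1 between the last two, so a spatially uniform field
    cancels and only field variation along the trajectory biases g."""
    dt = 2.0 * T / n
    t = (np.arange(n) + 0.5) * dt            # midpoint grid, symmetric about T
    z = z0 + v0 * t - 0.5 * g * t ** 2       # free-fall trajectory
    delta_nu = K_B2 * B_of_z(z) ** 2         # Hz
    weight = np.where(t < T, -1.0, 1.0)
    phase = 2.0 * np.pi * np.sum(weight * delta_nu) * dt   # rad
    return phase / (k_eff * T ** 2)          # m/s^2

# A uniform 1 mG field gives (numerically) zero bias; adding a 1 mG/m
# gradient along the drop does not.
uniform = zeeman_gravity_bias(lambda z: np.full_like(z, 1e-3))
gradient = zeeman_gravity_bias(lambda z: 1e-3 + 1e-3 * z)
print(uniform * 1e8, gradient * 1e8)  # in uGal (1 uGal = 1e-8 m/s^2)
```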

  2. How are medication errors defined? A systematic literature review of definitions and characteristics

    DEFF Research Database (Denmark)

    Lisby, Marianne; Nielsen, L P; Brock, Birgitte

    2010-01-01

    Multiplicity in terminology has been suggested as a possible explanation for the variation in the prevalence of medication errors. So far, few empirical studies have challenged this assertion. The objective of this review was, therefore, to describe the extent and characteristics of medication error definitions in hospitals and to consider the consequences for measuring the prevalence of medication errors.

  3. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    Science.gov (United States)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly-correlated stochastic noise are more insidious, and less attention is paid to them, partly due to the common belief that they only contribute to proton range uncertainties and not to systematic errors, thanks to some averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and of the range in the continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
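
The mechanism behind this systematic error can be demonstrated with a toy piecewise-linear calibration curve: because the curve has an angular point, zero-mean CT noise produces a nonzero mean shift in the converted RSP (a Jensen's-inequality effect), which is exactly the noise-dependent "effective calibration curve" idea. The curve knots and the noise level below are illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy piecewise-linear HU-to-RSP calibration curve with an angular point
# (kink) at HU = 0; the slope changes there from 0.001 to 0.00075 per HU.
hu_knots = np.array([-1000.0, 0.0, 2000.0])
rsp_knots = np.array([0.0, 1.0, 2.5])

def hu_to_rsp(hu):
    return np.interp(hu, hu_knots, rsp_knots)

# Zero-mean stochastic CT noise around a voxel whose true HU sits exactly
# at the kink (sigma is an assumed typical noise level in HU).
sigma = 30.0
noisy_hu = rng.normal(0.0, sigma, size=1_000_000)

rsp_true = hu_to_rsp(0.0)
rsp_mean = hu_to_rsp(noisy_hu).mean()   # noise-dependent "effective" RSP
print(rsp_true, rsp_mean)  # ~0.3% systematic offset despite zero-mean noise
```

For a Gaussian of width sigma centred on the kink, the analytic offset is (s_right - s_left) * sigma / sqrt(2*pi), i.e. about -0.003 here, consistent with the sub-1% systematic RSP errors the paper reports for typical noise levels.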

  4. Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors

    Directory of Open Access Journals (Sweden)

    G. Yang

    2003-06-01

    Examples of current research into systematic errors in climate models are used to demonstrate the importance of scale interactions on diurnal, intraseasonal and seasonal timescales for the mean and variability of the tropical climate system. This work has enabled some conclusions to be drawn about possible processes that may need to be represented, and some recommendations to be made regarding model improvements. It has been shown that the Maritime Continent heat source is a major driver of the global circulation, yet it is poorly represented in GCMs. A new climatology of the diurnal cycle has been used to provide compelling evidence of important land-sea breeze and gravity wave effects, which may play a crucial role in the heat and moisture budget of this key region for the tropical and global circulation. The role of the diurnal cycle has also been emphasized for intraseasonal variability associated with the Madden-Julian Oscillation (MJO). It is suggested that the diurnal cycle in Sea Surface Temperature (SST) during the suppressed phase of the MJO leads to a triggering of cumulus congestus clouds, which serve to moisten the free troposphere and hence precondition the atmosphere for the next active phase. It has further been shown that coupling between the ocean and atmosphere on intraseasonal timescales leads to a more realistic simulation of the MJO. These results stress the need for models to be able to simulate, firstly, the observed tri-modal distribution of convection and, secondly, the coupling between the ocean and atmosphere on diurnal to intraseasonal timescales. It is argued, however, that the current representation of the ocean mixed layer in coupled models is not adequate to represent the complex structure of the observed mixed layer, in particular the formation of salinity barrier layers, which can potentially provide much stronger local coupling between the atmosphere and ocean on diurnal to intraseasonal timescales.

  5. Strategies to reduce the systematic error due to tumor and rectum motion in radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Hoogeman, Mischa S.; Herk, Marcel van; Bois, Josien de; Lebesque, Joos V.

    2005-01-01

    Background and purpose: The goal of this work is to develop and evaluate strategies to reduce the uncertainty in the prostate position and rectum shape that arises in the preparation stage of the radiation treatment of prostate cancer. Patients and methods: Nineteen prostate cancer patients, who were treated with 3-dimensional conformal radiotherapy, each received a planning CT scan and 8-13 repeat CT scans during the treatment period. We quantified prostate motion relative to the pelvic bone by first matching the repeat CT scans on the planning CT scan using the bony anatomy. Subsequently, each contoured prostate, including seminal vesicles, was matched on the prostate in the planning CT scan to obtain the translations and rotations. The variation in prostate position was determined in terms of the systematic, random and group mean errors. We tested the performance of two correction strategies to reduce the systematic error due to prostate motion. The first, pre-treatment, strategy used only the initial rectum volume in the planning CT scan to adjust the angle of the prostate with respect to the left-right (LR) axis and the shape and position of the rectum. The second, adaptive, strategy used the data of repeat CT scans to improve the estimate of the prostate position and rectum shape during the treatment. Results: The largest component of prostate motion was a rotation around the LR axis. The systematic error (1 SD) was 5.1 deg and the random error was 3.6 deg (1 SD). The average LR-axis rotation between the planning and the repeat CT scans correlated significantly with the rectum volume in the planning CT scan (r=0.86, P<0.0001). Correction of the rotational position on the basis of the planning rectum volume alone reduced the systematic error by 28%. A correction based on the data of the planning CT scan and 4 repeat CT scans reduced the systematic error over the complete treatment period by a factor of 2. When the correction was
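
The "systematic, random and group mean" errors quoted above follow the decomposition that is standard in radiotherapy setup analysis; a sketch of that decomposition on invented toy data (the record does not spell out its exact estimators, so this is an assumption):

```python
import numpy as np

def motion_error_components(per_patient_measurements):
    """Decompose repeated per-patient measurements (e.g. LR-axis rotation
    per repeat CT, in degrees) into the group mean M, the systematic error
    Sigma (SD of the per-patient means, i.e. patient-to-patient variation)
    and the random error sigma (RMS of the per-patient SDs, i.e.
    day-to-day variation)."""
    means = np.array([np.mean(m) for m in per_patient_measurements])
    sds = np.array([np.std(m, ddof=1) for m in per_patient_measurements])
    M = means.mean()
    Sigma = means.std(ddof=1)
    sigma = np.sqrt(np.mean(sds ** 2))
    return M, Sigma, sigma

# Toy data: three patients, LR rotations (deg) over three repeat CTs each.
rotations = [[6.0, 2.0, 4.0], [-1.0, 1.0, 0.0], [3.0, 5.0, 4.0]]
M, Sigma, sigma = motion_error_components(rotations)
print(M, Sigma, sigma)
```

A correction strategy that predicts part of each patient's mean deviation (e.g. from the planning rectum volume) reduces Sigma but leaves sigma untouched, which is why the record reports its gains as reductions of the systematic error.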

  6. Error of semiclassical eigenvalues in the semiclassical limit - an asymptotic analysis of the Sinai billiard

    Science.gov (United States)

    Dahlqvist, Per

    1999-10-01

    We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^(-2/3)) of the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.

  7. Systematic errors in the readings of track etch neutron dosemeters caused by the energy dependence of response

    International Nuclear Information System (INIS)

    Tanner, R.J.; Thomas, D.J.; Bartlett, D.T.; Horwood, N.

    1999-01-01

    A study has been performed to assess the extent to which variations in the energy dependence of response of neutron personal dosemeters can cause systematic errors in readings obtained in workplace fields. This involved a detailed determination of the response functions of personal dosemeters used in the UK. These response functions were folded with workplace spectra to ascertain the under- or over-response in workplace fields.
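
The folding step, i.e. weighting the dosemeter's energy-dependent response with a workplace spectrum to obtain its under- or over-response, reduces to a weighted sum on a discretised energy grid. All numbers below (grid, spectrum, fluence-to-dose conversion coefficients, response values) are invented for illustration only:

```python
import numpy as np

# Hypothetical discretised energy grid (MeV), workplace fluence spectrum
# (normalised per bin), fluence-to-H_p(10) conversion coefficients, and the
# dosemeter's relative dose response (reading per unit H_p(10); ideally 1.0
# at every energy).
energy = np.array([1e-8, 1e-6, 1e-4, 0.01, 0.1, 1.0, 10.0])
fluence = np.array([0.30, 0.20, 0.15, 0.10, 0.10, 0.10, 0.05])
h_p10 = np.array([10.0, 12.0, 10.0, 30.0, 200.0, 420.0, 500.0])  # pSv*cm^2
rel_response = np.array([2.0, 1.8, 1.2, 0.5, 0.6, 1.0, 1.1])

# Folding: both the reading and the true dose are fluence-weighted sums.
dose_per_bin = fluence * h_p10
reading = np.sum(rel_response * dose_per_bin)
true_dose = np.sum(dose_per_bin)
print(reading / true_dose)  # ratio != 1 is the field-dependent systematic error
```

Because the ratio depends on the spectrum, a dosemeter calibrated in a reference field can systematically over- or under-read in a workplace field even though its calibration is formally correct.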

  9. Variation across mitochondrial gene trees provides evidence for systematic error: How much gene tree variation is biological?

    Science.gov (United States)

    Richards, Emilie J; Brown, Jeremy M; Barley, Anthony J; Chong, Rebecca A; Thomson, Robert C

    2018-02-19

    The use of large genomic datasets in phylogenetics has highlighted extensive topological variation across genes. Much of this discordance is assumed to result from biological processes. However, variation among gene trees can also be a consequence of systematic error driven by poor model fit, and the relative importance of biological versus methodological factors in explaining gene tree variation is a major unresolved question. Using mitochondrial genomes to control for biological causes of gene tree variation, we estimate the extent of gene tree discordance driven by systematic error and employ posterior prediction to highlight the role of model fit in producing this discordance. We find that the amount of discordance among mitochondrial gene trees is similar to the amount of discordance found in other studies that assume only biological causes of variation. This similarity suggests that the role of systematic error in generating gene tree variation is underappreciated and critical evaluation of fit between assumed models and the data used for inference is important for the resolution of unresolved phylogenetic questions.

  10. Nature versus nurture: A systematic approach to elucidate gene-environment interactions in the development of myopic refractive errors.

    Science.gov (United States)

    Miraldi Utz, Virginia

    2017-01-01

    Myopia is the most common eye disorder and major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach using a combination of gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Differential expression of refractive error was associated with time spent reading for those with low frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.

  11. Systematic errors in respiratory gating due to intrafraction deformations of the liver

    International Nuclear Information System (INIS)

    Siebenthal, Martin von; Szekely, Gabor; Lomax, Antony J.; Cattin, Philippe C.

    2007-01-01

    This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift.

  12. Effect of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths.

    Science.gov (United States)

    Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S

    2009-11-01

    Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. We analyzed data mined from 44 pathology residents using SlideTutor, a Medical Intelligent Tutoring System in Dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study, representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. The frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.

  13. Limited Effects of Agreement Errors on Word Monitoring in 5-year-olds

    Czech Academy of Sciences Publication Activity Database

    Smolík, Filip

    2011-01-01

    Roč. 2, č. 1 (2011), s. 17-28 ISSN 1804-3240 R&D Projects: GA ČR GAP407/10/2047 Institutional research plan: CEZ:AV0Z70250504 Keywords : language acquisition * morphosyntactic error * word monitoring Subject RIV: AN - Psychology

  14. Retesting the Limits of Data-Driven Learning: Feedback and Error Correction

    Science.gov (United States)

    Crosthwaite, Peter

    2017-01-01

    An increasing number of studies have looked at the value of corpus-based data-driven learning (DDL) for second language (L2) written error correction, with generally positive results. However, a potential conundrum for language teachers involved in the process is how to provide feedback on students' written production for DDL. The study looks at…

  15. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) to correct the GFS, following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make to, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and initially ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with GFS that correct temperature and specific humidity online show a reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub
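
The bias estimate described above (time-mean analysis increment divided by the 6-hr window, then added as a forcing term in the tendency equation) can be sketched as follows; the grid, noise level and imposed model deficiency are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical archive of analysis increments (analysis minus 6-hr forecast)
# for one model variable on a small grid, over many analysis cycles.  The
# model is given a constant deficiency of 0.02 units/hour, so each 6-hr
# increment is 0.12 units plus assimilation noise.
n_cycles, ny, nx = 400, 10, 10
true_bias_tendency = 0.02  # units per hour (the imposed model deficiency)
increments = 6.0 * true_bias_tendency + rng.normal(0.0, 0.5, (n_cycles, ny, nx))

# Bias correction estimate: time-mean increment divided by the 6-hr window,
# assuming model errors grow linearly over the assimilation cycle.
bias_tendency = increments.mean(axis=0) / 6.0

def step_with_online_correction(state, physics_tendency, dt_hours):
    """One model step with the estimated bias added as a forcing term in
    the tendency equation, i.e. the online correction of Danforth et al."""
    return state + dt_hours * (physics_tendency + bias_tendency)

print(float(bias_tendency.mean()))  # ~0.02, recovering the imposed deficiency
```

In the real system the correction is state- and season-dependent, so a low-dimensional representation (mean plus diurnal and semidiurnal harmonics, as the record describes) replaces the single constant used here.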

  16. What Makes Hydrologic Models Differ? Using SUMMA to Systematically Explore Model Uncertainty and Error

    Science.gov (United States)

    Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.

    2017-12-01

    Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework that defines a general set of conservation equations for mass and energy, a common core of numerical solvers, and options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models, which allows investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions that allows models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process, as well as provide a framework for identifying modeling decisions that may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing the modeling decisions implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin, located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of

  17. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Wee, L.; Harper, C.S.

    2010-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC errors on step and shoot IMRT of prostate cancer. Twelve MLC leaf-bank perturbations were introduced into six prostate IMRT treatment plans to simulate systematic MLC errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) for the planning target volume (PTV), rectum and bladder. Negative perturbations of the MLC were found to produce greater changes in endpoint dose parameters than positive perturbations (p < 0.05). Negative and positive synchronized MLC perturbations of 1 mm resulted in median changes of -2.32 and 1.78%, respectively, to D95% of the PTV, whereas asynchronized MLC perturbations of the same direction and magnitude resulted in median changes of 1.18 and 0.90%, respectively. Doses to the rectum were generally more sensitive to systematic MLC errors than doses to the bladder. Synchronized MLC perturbations of 1 mm resulted in median changes of endpoint dose parameters to both rectum and bladder of about 1 to 3%. Maximum reductions of -4.44 and -7.29% were recorded for CI and HTA, respectively, due to synchronized MLC perturbations of 1 mm. In summary, MLC errors resulted in a measurable amount of dose change to the PTV and surrounding critical structures in prostate IMRT. (author)

  18. Evaluation of errors and limits of the 63-μm house-dust-fraction method, a surrogate to predict hidden moisture damage

    Directory of Open Access Journals (Sweden)

    Assadian Ojan

    2009-10-01

    Full Text Available Abstract Background The aim of this study is to analyze possible random and systematic measurement errors and to detect methodological limits of the previously established method. Findings To examine the distribution of random errors (repeatability standard deviation) of the detection procedure, collective samples were taken from two uncontaminated rooms using a sampling vacuum cleaner, and 10 sub-samples each were examined with 3 parallel cultivation plates (DG18). In these two collective samples of new dust, the total counts of Aspergillus spp. varied moderately, by 25 and 29% (both 9 cfu per plate). At an average of 28 cfu/plate, the total count varied by only 13%. To evaluate the influence of old dust, old and fresh dust samples were examined. In both cases with old dust, the old dust influenced the results and produced false positives: hidden moisture damage was indicated where none was present. To quantify the influence of sand and sieving, 13 sites were sampled in parallel using the 63-μm and total-dust collection approaches. Sieving to 63 μm resulted in a more than 10-fold enrichment, due to the different quantity of inert sand in each total dust sample. Conclusion The major errors in the quantitative evaluation of house dust samples for mould fungi as reference values for assessment resulted from missing filtration, contamination with old dust and the massive influence of soil. If the assessment is guided by indicator genera, the percentage standard deviation lies in a moderate range.
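
As a rough cross-check of the repeatability figures quoted above, pure Poisson counting statistics already predict coefficients of variation of this order. This back-of-the-envelope sketch is not part of the study's own analysis; the three-plate averaging assumption is mine.

```python
import math

def poisson_cv(n_per_plate, k_plates=3):
    """Counting-statistics coefficient of variation for a mean of
    n_per_plate colonies averaged over k_plates parallel plates."""
    return 1.0 / math.sqrt(n_per_plate * k_plates)

print(poisson_cv(9))    # ~0.19, same order as the observed 25-29%
print(poisson_cv(28))   # ~0.11, close to the observed 13%
```

That the observed scatter at 9 cfu/plate exceeds the Poisson floor suggests additional sampling variability beyond pure counting noise.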

  19. Systematic errors in the tables of theoretical total internal conversion coefficients

    International Nuclear Information System (INIS)

    Dragoun, O.; Rysavy, M.

    1992-01-01

    Some of the total internal conversion coefficients presented in the widely used tables of Rosel et al (1978, Atom. Data Nucl. Data Tables 21, 291) were found to be erroneous. The errors appear for some low transition energies, all multipolarities, and probably all elements. The origin of the errors is explained. The subshell conversion coefficients of Rosel et al, where available, agree with our calculations to within a few percent. (author)

  20. Assessing systematic errors in GOSAT CO2 retrievals by comparing assimilated fields to independent CO2 data

    Science.gov (United States)

    Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.

    2012-12-01

    Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (version 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. With this assurance, we compare

  1. Technical Errors May Affect Accuracy of Torque Limiter in Locking Plate Osteosynthesis.

    Science.gov (United States)

    Savin, David D; Lee, Simon; Bohnenkamp, Frank C; Pastor, Andrew; Garapati, Rajeev; Goldberg, Benjamin A

    2016-01-01

    In locking plate osteosynthesis, proper surgical technique is crucial in reducing potential pitfalls, and use of a torque limiter makes it possible to control insertion torque. We conducted a study of the ways in which different techniques can alter the accuracy of torque limiters. We tested 22 torque limiters (1.5 Nm) for accuracy using hand and power tools under different rotational scenarios: hand power at low and high velocity and drill power at low and high velocity. We recorded the maximum torque reached after each torque-limiting event. Use of torque limiters under hand power at low velocity and high velocity resulted in significantly (P torque and subsequent complications. For torque limiters, the most reliable technique involves hand power at slow velocity or drill power with careful control of insertion speed until 1 torque-limiting event occurs.

  2. Calculation of the detection limit in radiation measurements with systematic uncertainties

    International Nuclear Information System (INIS)

    Kirkpatrick, J.M.; Russ, W.; Venkataraman, R.; Young, B.M.

    2015-01-01

    The detection limit (L_D) or Minimum Detectable Activity (MDA) is an a priori evaluation of assay sensitivity intended to quantify the suitability of an instrument or measurement arrangement for the needs of a given application. Traditional approaches as pioneered by Currie rely on Gaussian approximations to yield simple, closed-form solutions, and neglect the effects of systematic uncertainties in the instrument calibration. These approximations are applicable over a wide range of applications, but are of limited use in low-count applications, when high confidence values are required, or when systematic uncertainties are significant. One proposed modification to the Currie formulation attempts to account for systematic uncertainties within a Gaussian framework. We have previously shown that this approach results in an approximation formula that works best only for small values of the relative systematic uncertainty, for which the modification of Currie's method is the least necessary, and that it significantly overestimates the detection limit or gives infinite or otherwise non-physical results for the larger systematic uncertainties where such a correction would be the most useful. We have developed an alternative approach for calculating detection limits based on realistic statistical modeling of the counting distributions which accurately represents statistical and systematic uncertainties. Instead of a closed-form solution, numerical and iterative methods are used to evaluate the result. Accurate detection limits can be obtained by this method for the general case
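
Currie's Gaussian closed form, and the breakdown of a Gaussian-style systematic-uncertainty correction, can be illustrated in a few lines. The modified formula below is a generic stand-in for the kind of correction discussed above, not the specific formula analyzed in the paper.

```python
import math

def currie_ld(background_counts, k=1.645):
    """Currie a priori detection limit in counts (paired-blank case,
    Gaussian approximation, 5% false-positive and false-negative rates):
    L_D = k^2 + 2*k*sqrt(2*B)."""
    return k**2 + 2 * k * math.sqrt(2 * background_counts)

def currie_ld_systematic(background_counts, rel_sys, k=1.645):
    """Illustrative Gaussian-style modification for a relative systematic
    uncertainty rel_sys on the calibration. As noted above, this kind of
    correction diverges as k*rel_sys approaches 1."""
    denom = 1 - (k * rel_sys) ** 2
    if denom <= 0:
        return float("inf")  # the non-physical regime discussed above
    return currie_ld(background_counts, k) / denom

print(currie_ld(100))                  # ~49.2 counts
print(currie_ld_systematic(100, 0.1))  # modest inflation
print(currie_ld_systematic(100, 0.7))  # inf: the correction breaks down
```

The divergence at large relative systematic uncertainty is exactly the regime where the abstract argues a numerical treatment of the full counting distribution is needed instead.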

  3. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Harper, C.S.; Wee, L.

    2011-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC positional errors on step and shoot IMRT of prostate cancer. A total of 12 perturbations of MLC leaf banks were introduced into six prostate IMRT treatment plans to simulate systematic MLC positional errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) for the planning target volume (PTV), rectum and bladder. Negative perturbations of the MLC were found to produce greater changes in endpoint dose parameters than positive perturbations (p < 0.05), with asynchronised perturbations of 1 mm giving median changes in D95 of -1.2 and 0.9%, respectively. Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in D95 of -2.3 and 1.8%, respectively. Doses to the rectum were generally more sensitive to systematic MLC errors than doses to the bladder (p < 0.01). Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in endpoint dose parameters of the rectum and bladder of 1.0 to 2.5%. Maximum reductions of -4.4 and -7.3% were recorded for the conformity index (CI) and healthy tissue avoidance (HTA), respectively, due to synchronised MLC perturbations of 1 mm. MLC errors resulted in dosimetric changes in IMRT plans for prostate cancer. (author)

  4. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can...... positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support...

  5. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    Science.gov (United States)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different from others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical

  6. Upper limit for Poisson variable incorporating systematic uncertainties by Bayesian approach

    International Nuclear Information System (INIS)

    Zhu, Yongsheng

    2007-01-01

    To calculate the upper limit for a Poisson observable at a given confidence level, with inclusion of systematic uncertainties in the background expectation and signal efficiency, formulations have been established along the lines of the Bayesian approach. A FORTRAN program, BPULE, has been developed to implement the upper limit calculation.
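
The Bayesian construction described above can be sketched with a Monte Carlo marginalization over the nuisance parameters. This is an independent illustration of the approach, not the BPULE algorithm itself; the Gaussian priors, flat signal prior, and all numerical settings are assumptions of the sketch.

```python
import numpy as np

def bayes_upper_limit(n_obs, b_mean, b_sigma, eff_mean, eff_sigma,
                      cl=0.90, s_max=40.0, n_grid=800, n_mc=4000, seed=1):
    """Bayesian upper limit on the signal s for observed counts n_obs,
    with truncated-Gaussian priors on background b and efficiency eff,
    marginalized by Monte Carlo; flat prior on s."""
    rng = np.random.default_rng(seed)
    s = np.linspace(0.0, s_max, n_grid)
    b = np.clip(rng.normal(b_mean, b_sigma, n_mc), 1e-6, None)
    eff = np.clip(rng.normal(eff_mean, eff_sigma, n_mc), 1e-6, None)
    mu = eff[None, :] * s[:, None] + b[None, :]        # expected counts
    log_like = n_obs * np.log(mu) - mu                 # Poisson, n! dropped
    post = np.exp(log_like - log_like.max()).mean(axis=1)
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return s[np.searchsorted(cdf, cl)]                 # smallest s with CDF >= cl

ul = bayes_upper_limit(n_obs=3, b_mean=1.0, b_sigma=0.3,
                       eff_mean=0.8, eff_sigma=0.08)
```

Averaging the Poisson likelihood over the sampled nuisance values is the marginalization; the upper limit is then read off the posterior CDF of the signal grid.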

  7. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Science.gov (United States)

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...

  8. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    Science.gov (United States)

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  9. Systematic analysis of dependent human errors from the maintenance history at Finnish NPPs - A status report

    Energy Technology Data Exchange (ETDEWEB)

    Laakso, K. [VTT Industrial Systems (Finland)

    2002-12-01

    Operating experience has revealed missed-detection events, in which faults passed inspections and functional tests and persisted into the operating periods following outage maintenance. The causes of these failures have often been complex event sequences involving human and organisational factors. Common cause and other dependent failures of safety systems in particular may contribute significantly to the reactor core damage risk. The topic has been addressed in the Finnish studies of human common cause failures, where experiences of latent human errors have been searched for and analysed in detail in the maintenance history. The review of the bulk of the analysis results from the Olkiluoto and Loviisa plant sites shows that instrumentation and control and electrical equipment are more prone to failure events caused by human error than other maintenance areas, and that plant modifications and also predetermined preventive maintenance are significant sources of common cause failures. Most errors stem from the refuelling and maintenance outage period at both sites, and less than half of the dependent errors were identified during the same outage. The dependent human errors originating from modifications could be reduced by a more tailored specification and coverage of their start-up testing programmes. Improvements could also be achieved by more case-specific planning of the installation inspection and functional testing of complicated maintenance work, or of work objects of higher importance to plant safety and availability. Better use and analysis of condition monitoring information for maintenance steering could also help. Feedback from discussions of the analysis results with plant experts and professionals remains crucial in developing the final conclusions and recommendations that meet the specific development needs at the plants. (au)

  10. Estimation of glucose kinetics in fetal-maternal studies: Potential errors, solutions, and limitations

    International Nuclear Information System (INIS)

    Menon, R.K.; Bloch, C.A.; Sperling, M.A.

    1990-01-01

    We investigated whether errors occur in the estimation of ovine maternal-fetal glucose (Glc) kinetics using the isotope dilution technique when the Glc pool is rapidly expanded by exogenous (protocol A) or endogenous (protocol C) Glc entry, and sought possible solutions (protocol B). In protocol A (n = 8), after attaining steady-state Glc specific activity (SA) by [U-14C]glucose (period 1), infusion of Glc (period 2) predictably decreased Glc SA, whereas [U-14C]glucose concentration unexpectedly rose from 7,208 +/- 367 (mean +/- SE) in period 1 to 8,558 +/- 308 disintegrations/min (dpm) per ml in period 2 (P less than 0.01). Fetal endogenous Glc production (EGP) was negligible during period 1 (0.44 +/- 1.0), but yielded a physiologically impossible negative value of -2.1 +/- 0.72 mg.kg-1.min-1 during period 2. When the fall in Glc SA during Glc infusion was prevented by addition of [U-14C]glucose admixed with the exogenous Glc (protocol B; n = 7), EGP was no longer negative. In protocol C (n = 6), sequential infusions of four increasing doses of epinephrine serially decreased SA, whereas tracer Glc increased from 7,483 +/- 608 to 11,525 +/- 992 dpm/ml plasma (P less than 0.05), imposing an obligatory underestimation of EGP. Thus a tracer mixing problem leads to erroneous estimations of fetal Glc utilization and Glc production via the three-compartment model in sheep when the Glc pool is expanded exogenously or endogenously. These errors can be minimized by maintaining the Glc SA relatively constant
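
The sign problem above follows directly from steady-state dilution arithmetic. This toy calculation uses invented numbers (not the sheep data) to show how incomplete tracer mixing during pool expansion distorts the EGP estimate.

```python
def rate_of_appearance(tracer_infusion_dpm_min, specific_activity_dpm_mg):
    """Steady-state isotope dilution: total glucose appearance Ra = F / SA."""
    return tracer_infusion_dpm_min / specific_activity_dpm_mg

F = 1.0e6            # tracer infusion rate, dpm/min (illustrative)
sa_period1 = 2000.0  # SA before exogenous glucose, dpm/mg
ra1 = rate_of_appearance(F, sa_period1)   # 500 mg/min, all endogenous

# Exogenous infusion expands the pool and should lower SA accordingly,
# but if the tracer fails to mix into the whole pool the measured SA
# falls less than it should, so total Ra is underestimated:
exogenous = 400.0                          # mg/min of unlabeled glucose
sa_true = F / (ra1 + exogenous)            # SA for a fully mixed pool
sa_measured = 1.25 * sa_true               # incomplete mixing (+25%, assumed)
egp_estimate = rate_of_appearance(F, sa_measured) - exogenous
# egp_estimate < ra1: endogenous production appears suppressed; a larger
# mixing artifact drives the estimate negative, as in protocol A.
```

Adding tracer to the exogenous glucose (protocol B) keeps SA constant and removes the mixing artifact, which is why EGP was no longer negative there.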

  11. Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    Science.gov (United States)

    Stephenson, Edward; Imig, Astrid

    2009-10-01

    The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes in the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series in small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
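
The cross-ratio scheme cited above [1] can be demonstrated in a few lines: combining left/right rates for the two spin states cancels detector-acceptance and luminosity mismatches exactly at first order. The rates below are toy numbers, and equal polarization magnitudes for the two states are assumed (the case where, as the abstract notes, the scheme is exact).

```python
import math

def cross_ratio_asymmetry(L_up, R_up, L_dn, R_dn):
    """Cross-ratio estimator of the left-right asymmetry epsilon = p*A
    from left/right counts in the spin-up and spin-down states."""
    r = math.sqrt((L_up * R_dn) / (L_dn * R_up))
    return (r - 1) / (r + 1)

# Toy rates with epsilon = 0.05 and deliberately mismatched
# acceptances (aL, aR) and luminosities (n_up, n_dn):
eps, aL, aR, n_up, n_dn = 0.05, 1.3, 0.9, 1.0e6, 0.8e6
L_up = n_up * aL * (1 + eps); R_up = n_up * aR * (1 - eps)
L_dn = n_dn * aL * (1 - eps); R_dn = n_dn * aR * (1 + eps)
eps_est = cross_ratio_asymmetry(L_up, R_up, L_dn, R_dn)
print(eps_est)  # 0.05 regardless of the acceptance/luminosity mismatches
```

The acceptances and luminosities cancel algebraically in the ratio; the residual sensitivities the abstract discusses (geometry shifts, unequal polarization magnitudes, rate-dependent response) enter only at second order.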

  12. Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters, combined with cluster plasma X-ray diagnostics, can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and X-ray emission along the cluster line of sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and X-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H_0, as determined from the X-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

  13. A systematic review of patient medication error on self-administering medication at home.

    Science.gov (United States)

    Mira, José Joaquín; Lorenzo, Susana; Guilabert, Mercedes; Navarro, Isabel; Pérez-Jover, Virtudes

    2015-06-01

    Medication errors have been analyzed as a health professionals' responsibility (due to mistakes in prescription, preparation or dispensing). However, patients themselves (or their caregivers) sometimes make mistakes in the administration of the medication. The epidemiology of patient medication errors (PEs) has been scarcely reviewed in spite of its impact on people, on therapeutic effectiveness and on incremental cost for the health systems. This study reviews and describes the methodological approaches and results of published studies on the frequency, causes and consequences of medication errors committed by patients at home. A review of research articles published between 1990 and 2014 was carried out using MEDLINE, Web of Knowledge, Scopus, Tripdatabase and Index Medicus. The reported frequency of PEs ranged from 19 to 59%. The elderly and preschool children accounted for more mistakes than other groups. The most common were: incorrect dosage, forgetting, mixing up medications, failing to recall indications and taking out-of-date or inappropriately stored drugs. The majority of these mistakes had no negative consequences. Health literacy, information and communication, and the complexity of dispensing devices were identified as causes of PEs. Apps and other new technologies offer several opportunities for improving drug safety.

  14. Study of systematic errors in the determination of total Hg levels in the range -5% in inorganic and organic matrices with two reliable spectrometric determination procedures

    International Nuclear Information System (INIS)

    Kaiser, G.; Goetz, D.; Toelg, G.; Max-Planck-Institut fuer Metallforschung, Stuttgart; Knapp, G.; Maichin, B.; Spitzy, H.

    1978-01-01

    In the determination of Hg at ng/g and pg/g levels, systematic errors arise from faults in the analytical steps such as sample intake, preparation and decomposition. The sources of these errors have been studied both with 203Hg radiotracer techniques and with two multi-stage procedures developed for the determination of trace levels. The emission spectrometric (OES-MIP) procedure includes incineration of the sample in a microwave-induced oxygen plasma (MIP), isolation and enrichment on a gold adsorbent, and excitation in an argon plasma (MIP). The emitted Hg radiation (253.7 nm) is evaluated photometrically with a semiconductor element. The detection limit of the OES-MIP procedure was found to be 0.01 ng, and the coefficient of variation 5% for 1 ng Hg. The second procedure combines a semi-automated wet digestion method (HClO3/HNO3) with reduction-aeration (ascorbic acid/SnCl2) and the flameless atomic absorption technique (253.7 nm). The detection limit of this procedure was found to be 0.5 ng, and the coefficient of variation 5% for 5 ng Hg. (orig.) [de

  15. Errors, lies and misunderstandings: Systematic review on behavioural decision making in projects

    DEFF Research Database (Denmark)

    Stingl, Verena; Geraldi, Joana

    2017-01-01

    limitations—errors), pluralist (on political behaviour—lies), and contextualist (on social and organizational sensemaking—misunderstandings). Our review suggests avenues for future research with a wider coverage of theories in cognitive and social psychology and critical and mindful integration of findings...... in projects and beyond. However, the literature is fragmented and draws only on a fraction of the recent, insightful, and relevant developments on behavioural decision making. This paper organizes current research in a conceptual framework rooted in three schools of thinking—reductionist (on cognitive...

  16. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  17. Generalized linear mixed model for binary outcomes when covariates are subject to measurement errors and detection limits.

    Science.gov (United States)

    Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D

    2018-01-15

    Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from the constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
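
The bias of the HDL substitution criticized above is easy to see even in the simplest setting, a censored sample mean. The distribution, detection limit, and sample size below are invented for the illustration; the paper's mixed-model setting is far richer.

```python
import numpy as np

# Simulate a log-scale biomarker and left-censor it at the detection limit.
rng = np.random.default_rng(42)
log_conc = rng.normal(2.0, 1.0, 100_000)   # true log10 concentrations
lod = 2.0                                   # limit of detection (log10 scale)
censored = log_conc < lod                   # ~50% censored by construction

# Ad hoc HDL imputation: replace censored values with log10(DL / 2).
hdl = log_conc.copy()
hdl[censored] = np.log10((10 ** lod) / 2)

print(log_conc.mean())  # close to the true mean of 2.0
print(hdl.mean())       # systematically biased away from 2.0
```

Because half the detection limit sits well above the typical censored value here, the imputed mean is biased high; a likelihood-based method that models the censoring, as in the paper, avoids this.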

  18. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    Directory of Open Access Journals (Sweden)

    Ulrik W Nash

    Full Text Available Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem.

  19. Instruments used to assess functional limitations in workers applying for disability benefit : a systematic review

    NARCIS (Netherlands)

    Spanjer, Jerry; Groothoff, Johan W.; Brouwer, Sandra

    2011-01-01

    Purpose. To systematically review the quality of the psychometric properties of instruments for assessing functional limitations in workers applying for disability benefit. Method. Electronic searches of Medline, Embase, CINAHL and PsycINFO were performed to identify studies focusing on the

  20. Systematic errors in the determination of the spectroscopic g-factor in broadband ferromagnetic resonance spectroscopy: A proposed solution

    Science.gov (United States)

    Gonzalez-Fuentes, C.; Dumas, R. K.; García, C.

    2018-01-01

    A theoretical and experimental study of the influence of small offsets of the magnetic field (δH) on the measurement accuracy of the spectroscopic g-factor (g) and saturation magnetization (Ms) obtained by broadband ferromagnetic resonance (FMR) measurements is presented. The random nature of δH generates systematic and opposite-sign deviations of the values of g and Ms with respect to their true values. A δH on the order of a few Oe leads to a ~10% error of g and Ms for a typical range of frequencies employed in broadband FMR experiments. We propose a simple experimental methodology to significantly minimize the effect of δH on the fitted values of g and Ms, eliminating their apparent dependence on the range of frequencies employed. Our method was successfully tested using broadband FMR measurements on a 5 nm thick Ni80Fe20 film for frequencies ranging between 3 and 17 GHz.
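    The bias can be reproduced numerically: generate ideal in-plane Kittel resonance fields, shift every field by a fixed offset, and fit g and 4πMs while ignoring that offset. The film parameters (g = 2.10, 4πMs = 10 kG) and the 5 Oe offset are assumptions for illustration; this sketch demonstrates the problem, not the authors' proposed correction.

```python
import numpy as np
from scipy.optimize import curve_fit

MU_B_OVER_H = 1.3996e-3  # Bohr magneton / Planck constant, in GHz per Oe

def kittel(H, g, four_pi_Ms):
    """In-plane Kittel relation: f = (g * muB / h) * sqrt(H * (H + 4*pi*Ms))."""
    return g * MU_B_OVER_H * np.sqrt(H * (H + four_pi_Ms))

g_true, four_pi_Ms_true, dH = 2.10, 10_000.0, 5.0   # dH: field offset in Oe
freqs = np.linspace(3, 17, 15)                      # GHz

# Exact resonance fields from inverting the Kittel relation, then every
# measured field is shifted by the (unknown) offset dH.
pref = g_true * MU_B_OVER_H
H_res = (-four_pi_Ms_true + np.sqrt(four_pi_Ms_true**2 + 4 * (freqs / pref) ** 2)) / 2
H_meas = H_res + dH

# A naive fit that ignores the offset returns biased g and 4*pi*Ms.
(g_fit, Ms_fit), _ = curve_fit(kittel, H_meas, freqs, p0=[2.0, 9_000.0])
print(f"g: true {g_true}, fitted {g_fit:.4f}")
print(f"4piMs: true {four_pi_Ms_true:.0f} Oe, fitted {Ms_fit:.0f} Oe")
```

    The constant offset cannot be absorbed by the two fit parameters, so both come back shifted even though the input data are noiseless.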

  1. Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion

    Science.gov (United States)

    Blomquist, Mats; Wernersson, Ake V.

    1999-11-01

    When range cameras are used for analyzing irregular material on a conveyor belt there will be complications like missing segments caused by occlusion. Also, a number of range discontinuities will be present. In a framework based on stochastic geometry, conditions are found for the cases when range discontinuities take place. The test objects in this paper are pellets for the steel industry. An illuminating laser plane will give range discontinuities at the edges of each individual object. These discontinuities are used to detect and measure the chord created by the intersection of the laser plane and the object. From the measured chords we derive the average diameter and its variance. An improved method is to use a pair of parallel illuminating light planes to extract two chords. The estimation error for this method is not larger than the natural shape fluctuations (the difference in diameter) for the pellets. The laser-camera optronics is sensitive enough both for material on a conveyor belt and for free-falling material leaving the conveyor.
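    The chord-to-diameter step can be sanity-checked with a toy Monte Carlo for spherical pellets. The uniform-offset slicing model below is an assumption standing in for the paper's optical geometry: a plane cutting a sphere of radius R at offset u exposes a chord of width 2*sqrt(R^2 - u^2), whose mean over u is (pi/4)*D.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_diameter(true_diameter=12.0, n_pellets=100_000):
    """A slicing plane cuts each spherical pellet at a uniform random offset u
    from its centre, exposing a chord of width 2*sqrt(R**2 - u**2).
    Since E[chord] = (pi/4) * D, the diameter is recovered as (4/pi) * mean."""
    R = true_diameter / 2
    u = rng.uniform(0, R, n_pellets)
    chords = 2 * np.sqrt(R**2 - u**2)
    return 4 / np.pi * chords.mean()

print(estimate_diameter())  # close to the true diameter of 12.0
```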

  2. Systematical and statistical errors in using reference light sources to calibrate TLD readers

    International Nuclear Information System (INIS)

    Burgkhardt, B.; Piesch, E.

    1981-01-01

    Three light sources, namely a NaI(Tl) scintillator + Ra, a NaI(Tl) scintillator + 14C and a plastic scintillator + 14C, were used during a period of 24 months for a daily check of two TLD readers: the Harshaw 2000 A + B and the Toledo 651. On the basis of light source measurements, long-term changes and day-to-day fluctuations of the reader response were investigated. Systematic changes of the Toledo reader response of up to 6% during a working week are explained by nitrogen effects in the plastic scintillator light source. It was found that the temperature coefficient of the light source intensity was -0.05%/°C for the plastic scintillator and -0.3%/°C for the NaI(Tl) scintillator. The 210Pb content in the Ra-activated NaI(Tl) scintillator caused a time-dependent decrease in light source intensity of 3%/yr for the light source in the Harshaw reader. The internal light sources revealed a relative standard deviation of 0.5% for the Toledo reader and the Harshaw reader after respective reading times of 0.45 and 100 sec. (author)
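    The reported coefficients lend themselves to a simple daily correction. The helper below is hypothetical (not from the paper) and assumes a linear temperature response, using the -0.3%/°C and 3%/yr figures quoted in the abstract.

```python
def corrected_response(reading, temp_c, ref_temp_c=20.0,
                       temp_coeff=-0.003, years_elapsed=0.0,
                       decay_per_year=0.03):
    """Normalize a light-source reading to a reference temperature and to the
    source's initial intensity.  temp_coeff = -0.3%/degC (NaI(Tl) figure) and
    decay_per_year = 3%/yr (Ra-activated source) are taken from the abstract;
    the linear temperature model is an assumption."""
    temp_factor = 1 + temp_coeff * (temp_c - ref_temp_c)
    decay_factor = (1 - decay_per_year) ** years_elapsed
    return reading / (temp_factor * decay_factor)

print(corrected_response(97.0, 20.0, years_elapsed=1.0))  # recovers ~100
```

    Without such corrections, the temperature and decay drifts would be misread as a change in reader response.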

  3. Resilience to emotional distress in response to failure, error or mistakes: A systematic review.

    Science.gov (United States)

    Johnson, Judith; Panagioti, Maria; Bass, Jennifer; Ramsey, Lauren; Harrison, Reema

    2017-03-01

    Perceptions of failure have been implicated in a range of psychological disorders, and even a single experience of failure can heighten anxiety and depression. However, not all individuals experience significant emotional distress following failure, indicating the presence of resilience. The current systematic review synthesised studies investigating resilience factors to emotional distress resulting from the experience of failure. For the definition of resilience we used the Bi-Dimensional Framework for resilience research (BDF), which suggests that resilience factors are those which buffer the impact of risk factors, and outlines criteria a variable should meet in order to be considered as conferring resilience. Studies were identified through electronic searches of PsycINFO, MEDLINE, EMBASE and Web of Knowledge. Forty-six relevant studies reported in 38 papers met the inclusion criteria. These provided evidence of the presence of factors which confer resilience to emotional distress in response to failure. The strongest support was found for the factors of higher self-esteem, more positive attributional style, and lower socially-prescribed perfectionism. Weaker evidence was found for the factors of lower trait reappraisal, lower self-oriented perfectionism and higher emotional intelligence. The majority of studies used experimental or longitudinal designs. These results identify specific factors which should be targeted by resilience-building interventions. Keywords: resilience; failure; stress; self-esteem; attributional style; perfectionism. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Constituent quarks and systematic errors in mid-rapidity charged multiplicity dNch/dη distributions

    Science.gov (United States)

    Tannenbaum, M. J.

    2018-01-01

    Centrality definition in A + A collisions at colliders such as RHIC and LHC suffers from a correlated systematic uncertainty caused by the efficiency of detecting a p + p collision (50 ± 5% for PHENIX at RHIC). In A + A collisions where centrality is measured by the number of nucleon collisions, Ncoll, or the number of nucleon participants, Npart, or the number of constituent quark participants, Nqp, the error in the efficiency of the primary interaction trigger (Beam-Beam Counters) for a p + p collision leads to a correlated systematic uncertainty in Npart, Ncoll or Nqp which reduces binomially as the A + A collisions become more central. If this is not correctly accounted for in projections of A + A to p + p collisions, then mistaken conclusions can result. A recent example is presented concerning whether the mid-rapidity charged multiplicity per constituent quark participant (dNch/dη)/Nqp in Au + Au at RHIC was the same as the value in p + p collisions.
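    The binomial reduction of the trigger-efficiency uncertainty can be illustrated with a toy independence model: if each nucleon-nucleon collision fires the trigger with probability eps_pp, the A + A event is missed only when all of them fail. This is a caricature for intuition, not PHENIX's Glauber-based procedure.

```python
def trigger_efficiency(n_coll, eps_pp=0.50):
    """If each of n_coll nucleon-nucleon collisions fires the trigger
    independently with probability eps_pp, the A+A event is missed only
    when every one of them fails."""
    return 1 - (1 - eps_pp) ** n_coll

for n in (1, 2, 5, 10):
    lo, hi = trigger_efficiency(n, 0.45), trigger_efficiency(n, 0.55)
    print(f"Ncoll = {n:2d}: efficiency {trigger_efficiency(n):.4f}, "
          f"spread {hi - lo:.4f} for eps_pp = 50 +/- 5%")
```

    The ±5% uncertainty that matters for peripheral (p + p-like) events shrinks to a fraction of a percent for central collisions, which is why it must be tracked separately when comparing the two regimes.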

  5. ERESYE - a expert system for the evaluation of uncertainties related to systematic experimental errors; ERESYE - un sistema esperto per la valutazione di incertezze correlate ad errori sperimentali sistematici

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, T; Panini, G C [ENEA - Dipartimento Tecnologie Intersettoriali di Base, Centro Ricerche Energia, Casaccia (Italy); Amoroso, A [Ricercatore Ospite (Italy)

    1989-11-15

    Information about systematic errors is not given in EXFOR, the database of nuclear experimental measurements: their assessment is left to the ability of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, gives valuable aid. The expert system ERESYE has been implemented for investigating the feasibility of an automatic evaluation of the systematic errors in the experiments. The features of the project which led to the implementation of the system are presented. (author)

  6. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  7. The influence of random and systematic errors on a general definition of minimum detectable amount (MDA) applicable to all radiobioassay measurements

    International Nuclear Information System (INIS)

    Brodsky, A.

    1985-01-01

    An approach to defining minimum detectable amount (MDA) of radioactivity in a sample will be discussed, with the aim of obtaining comments helpful in developing a formulation of MDA that will be broadly applicable to all kinds of radiobioassay measurements, and acceptable to the scientists who make these measurements. Also, the influence of random and systematic errors on the defined MDA is examined.
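    For context, the most widely used formulation in this area is Currie's, where the detection limit in counts is L_D = 2.71 + 4.65*sqrt(B) at 5% false-positive and false-negative rates. The sketch below applies that standard formula; it is not necessarily the definition the author proposes.

```python
import math

def currie_mda_bq(background_counts, count_time_s, efficiency):
    """Currie-style minimum detectable activity: the detection limit in counts,
    L_D = 2.71 + 4.65 * sqrt(B) (5% false-positive and false-negative rates),
    divided by counting time and detection efficiency to give becquerels."""
    l_d = 2.71 + 4.65 * math.sqrt(background_counts)
    return l_d / (count_time_s * efficiency)

# 400 background counts in a 600 s count at 25% efficiency: about 0.64 Bq.
print(currie_mda_bq(background_counts=400, count_time_s=600, efficiency=0.25))
```

    Systematic errors (e.g. in efficiency or background subtraction) enter multiplicatively and are not captured by the Poisson term alone, which is the gap the abstract's broader MDA definition addresses.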

  8. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    Energy Technology Data Exchange (ETDEWEB)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico; Reichardt, Christian L. [School of Physics, University of Melbourne, 313 David Caro building, Swanston St and Tin Alley, Parkville VIC 3010 (Australia); Baxter, Eric J. [Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd Street, Philadelphia, PA 19104 (United States); Bleem, Lindsey E. [Argonne National Laboratory, High-Energy Physics Division, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Crawford, Thomas M. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Holder, Gilbert P. [Department of Astronomy and Department of Physics, University of Illinois, 1002 West Green St., Urbana, IL 61801 (United States); Manzotti, Alessandro, E-mail: srinivasan.raghunathan@unimelb.edu.au, E-mail: s.patil2@student.unimelb.edu.au, E-mail: ebax@sas.upenn.edu, E-mail: federico.bianchini@unimelb.edu.au, E-mail: bleeml@uchicago.edu, E-mail: tcrawfor@kicp.uchicago.edu, E-mail: gholder@illinois.edu, E-mail: manzotti@uchicago.edu, E-mail: christian.reichardt@unimelb.edu.au [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  9. Analysis of systematic error deviation of water temperature measurement at the fuel channel outlet of the reactor Maria

    International Nuclear Information System (INIS)

    Bykowski, W.

    2000-01-01

    The reactor Maria has two primary cooling circuits: the fuel channel cooling circuit and the reactor pool cooling circuit. Fuel elements are placed inside the fuel channels, which are linked in parallel between the collectors. In the course of reactor operation the following measurements are performed: continuous measurement of water temperature at the fuel channel inlets, continuous measurement of water temperature at the outlet of each fuel channel and continuous measurement of water flow rate through each fuel channel. Based on those thermal-hydraulic parameters the instantaneous thermal power generated in each fuel channel is determined, and from that value the thermal balance and the degree of fuel burnup are assessed. The work contains an analysis estimating the systematic error of the temperature measurement at the outlet of each fuel channel, and hence the erroneous assessment of the thermal power extracted in each fuel channel and of the burnup degree for the individual fuel element. The results of measurements of separate factors of deviations for the fuel channels are enclosed. (author)
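    The sensitivity of the channel power to an outlet-temperature bias follows directly from P = m_dot * c_p * (T_out - T_in): a fixed offset dT produces a relative power error of dT / (T_out - T_in). A sketch with invented channel parameters (not Maria's actual operating data):

```python
def channel_power_kw(flow_kg_s, t_in_c, t_out_c, cp_kj_per_kg_k=4.18):
    """Thermal power extracted by the coolant: P = m_dot * c_p * (T_out - T_in)."""
    return flow_kg_s * cp_kj_per_kg_k * (t_out_c - t_in_c)

true_p = channel_power_kw(5.0, 40.0, 60.0)    # illustrative channel: 418 kW
biased_p = channel_power_kw(5.0, 40.0, 60.5)  # +0.5 degC systematic outlet offset
print(f"{true_p:.0f} kW vs {biased_p:.1f} kW: "
      f"{100 * (biased_p / true_p - 1):.1f}% power overestimate")
```

    Here a 0.5 °C outlet offset on a 20 °C temperature rise overestimates the channel power, and hence the inferred burnup, by 2.5%.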

  10. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  11. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    Science.gov (United States)

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. 
The presented algorithm was able to identify systematic errors as small
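    The detection loop described above (smooth every trend, take percent deviations, threshold the median across metabolites) can be sketched as follows. A moving average stands in for the paper's nonparametric fit, and the twelve trends, the threshold, and the 10% dilution are all invented.

```python
import numpy as np

def flag_dilution_errors(trends, threshold_pct=4.0, window=5):
    """Smooth every metabolite trend (moving average as a stand-in for a
    nonparametric fit), take percent deviations at each timepoint, and flag
    timepoints where the MEDIAN absolute deviation across all metabolites
    exceeds the threshold -- a shift shared by every metabolite in one
    sample suggests a dilution error rather than biological variation."""
    kernel = np.ones(window) / window
    half = window // 2
    deviations = []
    for y in trends:
        smooth = np.convolve(np.pad(y, half, mode="edge"), kernel, mode="valid")
        deviations.append(100 * (y - smooth) / smooth)
    median_dev = np.median(np.abs(deviations), axis=0)
    median_dev[:half] = 0.0   # moving-average edges are unreliable; skip them
    median_dev[-half:] = 0.0
    return np.where(median_dev > threshold_pct)[0]

# Twelve smooth synthetic trends; the sample at timepoint 20 is diluted by 10%.
times = np.linspace(0, 10, 40)
trends = np.array([(5 + i) * np.exp(0.05 * i * times) for i in range(12)])
trends[:, 20] *= 0.90
print(flag_dilution_errors(trends))  # flags timepoint 20
```

    Because the dilution shifts every metabolite in the same sample, the median deviation isolates it from metabolite-specific fluctuations.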

  12. Registration factors that limit international mobility of people holding physiotherapy qualifications: A systematic review.

    Science.gov (United States)

    Foo, Jonathan S; Storr, Michael; Maloney, Stephen

    2016-06-01

    There is no enforced international standardisation of the physiotherapy profession. Thus, registration is used in many countries to maintain standards of care and to protect the public. However, registration may also limit international workforce mobility. What is known about the professional registration factors that may limit the international mobility of people holding physiotherapy qualifications? Systematic review using an electronic database search and hand searching of the World Confederation for Physical Therapy and International Network of Physiotherapy Regulatory Authorities websites. Analysis was conducted using thematic analysis. 10 articles and eight websites were included from the search strategy. Data is representative of high-income English speaking countries. Four themes emerged regarding limitations to professional mobility: practice context, qualification recognition, verification of fitness to practice, and incidental limitations arising from the registration process. Professional mobility is limited by differences in physiotherapy education programmes, resulting in varying standards of competency. Thus, it is often necessary to verify clinical competencies through assessments, as well as determining professional attributes and ability to apply competencies in a different practice context, as part of the registration process. There has been little evaluation of registration practices, and at present, there is a need to re-evaluate current registration processes to ensure they are efficient and effective, thereby enhancing workforce mobility. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    In spectacular events, a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  14. Errors and limits in the determination of plasma electron density by measuring the absolute values of the emitted continuum radiation intensity

    International Nuclear Information System (INIS)

    Bilbao, L.; Bruzzone, H.; Grondona, D.

    1994-01-01

    The reliable determination of a plasma electron structure requires a good knowledge of the errors affecting the employed technique. A technique based on the measurements of the absolute light intensity emitted by travelling plasma structures in plasma focus devices has been used, but it can be easily modified to other geometries and even to stationary plasma structures with time-varying plasma densities. The purpose of this work is to discuss in some detail the errors and limits of this technique. Three separate errors are shown: the minimum size of the density structure that can be resolved, an overall error in the measurements themselves, and an uncertainty in the shape of the density profile. (author)

  15. Systematic Analysis of Video Data from Different Human-Robot Interaction Studies: A Categorisation of Social Signals During Error Situations

    OpenAIRE

    Manuel Giuliani; Nicole Mirnig; Gerald Stollnberger; Susanne Stadler; Roland Buchner; Manfred Tscheligi

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows tha...

  16. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    Science.gov (United States)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.
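    The additive property can be illustrated with a stand-in retrieval function: perturb each input individually, then all at once, and compare the deviations. The function and numbers below are invented; only the perturb-and-compare procedure mirrors the study.

```python
import numpy as np

def retrieval(optical):
    """Stand-in for the lidar inversion: a smooth nonlinear map from the five
    optical inputs (3 backscatter + 2 extinction channels) to a single
    microphysical parameter (purely illustrative, not the PIC algorithm)."""
    return float(np.sqrt(optical @ np.array([0.4, 0.3, 0.2, 0.6, 0.5])))

base = np.array([1.0, 0.8, 0.5, 0.3, 0.2])
r0 = retrieval(base)

# Bias each optical input by +5% individually...
individual = []
for i in range(5):
    x = base.copy()
    x[i] *= 1.05
    individual.append(retrieval(x) - r0)

# ...and all five at once: in the linear regime the deviations nearly add.
joint = retrieval(base * 1.05) - r0
print(f"sum of individual deviations {sum(individual):.5f} vs joint {joint:.5f}")
```

    As long as the perturbations stay in the linear regime, the deviation from simultaneous biases is well approximated by the sum of the individual ones, which is what lets single-input sensitivity studies predict the combined case.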

  17. Interpretations of systematic errors in the NCEP Climate Forecast System at lead times of 2, 4, 8, ..., 256 days

    Directory of Open Access Journals (Sweden)

    Siwon Song

    2012-09-01

    Full Text Available The climatology of mean bias errors (relative to 1-day forecasts) was examined in a 20-year hindcast set from version 1 of the Climate Forecast System (CFS), for forecast lead times of 2, 4, 8, 16, ... 256 days, verifying in different seasons. Results mostly confirm the simple expectation that atmospheric model biases should be evident at short lead (2–4 days), while soil moisture errors develop over days-weeks and ocean errors emerge over months. A further simplification is also evident: surface temperature bias patterns have nearly fixed geographical structure, growing with different time scales over land and ocean. The geographical pattern has mostly warm and dry biases over land and cool bias over the oceans, with two main exceptions: (1) deficient stratocumulus clouds cause warm biases in eastern subtropical oceans, and (2) high latitude land is too cold in boreal winter. Further study of the east Pacific cold tongue-Intertropical Convergence Zone (ITCZ) complex shows a possible interaction between a rapidly-expressed atmospheric model bias (poleward shift of deep convection beginning at day 2) and slow ocean dynamics (erroneously cold upwelling along the equator) in leads > 1 month. Further study of the high latitude land cold bias shows that it is a thermal wind balance aspect of the deep polar vortex, not just a near-surface temperature error under the wintertime inversion, suggesting that its development time scale of weeks to months may involve long timescale processes in the atmosphere, not necessarily in the land model. Winter zonal wind errors are small in magnitude, but a refractive index map shows that this can cause modest errors in Rossby wave ducting. Finally, as a counterpoint to our initial expectations about error growth, a case of non-monotonic error growth is shown: velocity potential bias grows with lead on a time scale of weeks, then decays over months. It is hypothesized that compensations between land and ocean errors may

  18. Limited-sampling strategies for anti-infective agents: systematic review.

    Science.gov (United States)

    Sprague, Denise A; Ensom, Mary H H

    2009-09-01

    Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. 
However, most of the included studies did not provide an adequate description of the methods or
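    The core idea of a limited-sampling strategy (regressing full AUC on a few concentration timepoints) can be sketched on simulated pharmacokinetic data. The one-compartment model, sampling times, and parameter ranges below are assumptions for illustration, not any of the reviewed strategies.

```python
import numpy as np

rng = np.random.default_rng(5)

def concentration(dose, ke, t):
    """One-compartment IV bolus with unit volume: C(t) = dose * exp(-ke * t)."""
    return dose * np.exp(-ke * t)

# Simulated patients with varying elimination rate constants; for this model
# the full AUC has a closed form, dose / ke.
n, dose = 200, 100.0
ke = rng.uniform(0.1, 0.3, n)
auc_true = dose / ke

# Limited-sampling strategy: regress AUC on concentrations at two timepoints.
t1, t2 = 2.0, 8.0
X = np.column_stack([np.ones(n), concentration(dose, ke, t1),
                     concentration(dose, ke, t2)])
coef, *_ = np.linalg.lstsq(X, auc_true, rcond=None)
pred = X @ coef
r2 = 1 - ((auc_true - pred) ** 2).sum() / ((auc_true - auc_true.mean()) ** 2).sum()
print("R^2 of the two-timepoint AUC estimate:", round(r2, 4))
```

    Proper validation, as the level I studies did, requires fitting the regression on an index group and checking predictive performance on a separate validation group rather than on the training data alone.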

  19. Palliative Oncologic Care Curricula for Providers in Resource-Limited and Underserved Communities: a Systematic Review.

    Science.gov (United States)

    Xu, Melody J; Su, David; Deboer, Rebecca; Garcia, Michael; Tahir, Peggy; Anderson, Wendy; Kinderman, Anne; Braunstein, Steve; Sherertz, Tracy

    2017-12-20

    Familiarity with principles of palliative care, supportive care, and palliative oncological treatment is essential for providers caring for cancer patients, though this may be challenging in global communities where resources are limited. Herein, we describe the scope of literature on palliative oncological care curricula for providers in resource-limited settings. A systematic literature review was conducted using PubMed, Embase, Cochrane Library, Web of Science, Cumulative Index to Nursing and Allied Health Literature, Med Ed Portal databases, and gray literature. All available prospective cohort studies, case reports, and narratives published up to July 2017 were eligible for review. Fourteen articles were identified and referenced palliative care education programs in Argentina, Uganda, Kenya, Australia, Germany, the USA, or multiple countries. The most common teaching strategy was lecture-based, followed by mentorship and experiential learning involving role play and simulation. Education topics included core principles of palliative care, pain and symptom management, and communication skills. Two programs included additional topics specific to the underserved or American Indian/Alaskan Native community. Only one program discussed supportive cancer care, and no program reported educational content on resource-stratified decision-making for palliative oncological treatment. Five programs reported positive participant satisfaction, and three programs described objective metrics of increased educational or research activity. There is scant literature on effective curricula for providers treating cancer patients in resource-limited settings. Emphasizing supportive cancer care and palliative oncologic treatments may help address gaps in education; increased outcome reporting may help define the impact of palliative care curriculum within resource-limited communities.

  20. A systematic review of portable electronic technology for health education in resource-limited settings.

    Science.gov (United States)

    McHenry, Megan S; Fischer, Lydia J; Chun, Yeona; Vreeman, Rachel C

    2017-08-01

    The objective of this study is to conduct a systematic review of the literature of how portable electronic technologies with offline functionality are perceived and used to provide health education in resource-limited settings. Three reviewers evaluated articles and performed a bibliography search to identify studies describing health education delivered by portable electronic device with offline functionality in low- or middle-income countries. Data extracted included: study population; study design and type of analysis; type of technology used; method of use; setting of technology use; impact on caregivers, patients, or overall health outcomes; and reported limitations. Searches yielded 5514 unique titles. Out of 75 critically reviewed full-text articles, 10 met inclusion criteria. Study locations included Botswana, Peru, Kenya, Thailand, Nigeria, India, Ghana, and Tanzania. Topics addressed included: development of healthcare worker training modules, clinical decision support tools, patient education tools, perceptions and usability of portable electronic technology, and comparisons of technologies and/or mobile applications. Studies primarily looked at the assessment of developed educational modules on trainee health knowledge, perceptions and usability of technology, and comparisons of technologies. Overall, studies reported positive results for portable electronic device-based health education, frequently reporting increased provider/patient knowledge, improved patient outcomes in both quality of care and management, increased provider comfort level with technology, and an environment characterized by increased levels of technology-based, informal learning situations. Negative assessments included high investment costs, lack of technical support, and fear of device theft. 
While the research is limited, portable electronic educational resources present promising avenues to increase access to effective health education in resource-limited settings, contingent

  1. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Xu, H; Chetty, I; Wen, N

    2016-01-01

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30°, 45°, 60°) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6° in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian
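
    The per-axis means, standard deviations, and 95% intervals reported above can be reproduced with a few lines of code. The deviations below are synthetic stand-ins for per-fraction 2D/3D-minus-3D/3D differences, not the study's data; a minimal sketch:

    ```python
    import statistics

    def error_summary(deviations_mm):
        """Mean, standard deviation, and 95% interval (mean ± 1.96·SD)."""
        mean = statistics.fmean(deviations_mm)
        sd = statistics.stdev(deviations_mm)
        return mean, sd, (mean - 1.96 * sd, mean + 1.96 * sd)

    # Synthetic per-fraction LNG-axis differences (2D/3D minus 3D/3D), in mm
    lng_errors = [-0.8, 0.3, -0.5, 0.1, -0.2, 0.4, -0.9, 0.0, -0.3, 0.2]
    mean, sd, ci95 = error_summary(lng_errors)
    print(f"LNG: {mean:+.2f} ± {sd:.2f} mm, 95% interval [{ci95[0]:+.2f}, {ci95[1]:+.2f}] mm")
    ```

    The same summary applied to each axis (LNG, LAT, VRT) yields the "mean ± SD" entries quoted in the abstract.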

  2. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H [Wayne State University, Detroit, MI (United States); Chetty, I; Wen, N [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30°, 45°, 60°) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6° in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  3. Hamiltonian formulation of quantum error correction and correlated noise: Effects of syndrome extraction in the long-time limit

    Science.gov (United States)

    Novais, E.; Mucciolo, Eduardo R.; Baranger, Harold U.

    2008-07-01

    We analyze the long-time behavior of a quantum computer running a quantum error correction (QEC) code in the presence of a correlated environment. Starting from a Hamiltonian formulation of realistic noise models, and assuming that QEC is indeed possible, we find formal expressions for the probability of a given syndrome history and the associated residual decoherence encoded in the reduced density matrix. Systems with nonzero gate times (“long gates”) are included in our analysis by using an upper bound on the noise. In order to introduce the local error probability for a qubit, we assume that propagation of signals through the environment is slower than the QEC period (hypercube assumption). This allows an explicit calculation in the case of a generalized spin-boson model and a quantum frustration model. The key result is a dimensional criterion: If the correlations decay sufficiently fast, the system evolves toward a stochastic error model for which the threshold theorem of fault-tolerant quantum computation has been proven. On the other hand, if the correlations decay slowly, the traditional proof of this threshold theorem does not hold. This dimensional criterion bears many similarities to criteria that occur in the theory of quantum phase transitions.

  4. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  5. A systematic review of the efficacy and limitations of venous intervention in stasis ulceration.

    Science.gov (United States)

    Montminy, Myriam L; Jayaraj, Arjun; Raju, Seshadri

    2018-05-01

    Surgical techniques to address various components of chronic venous disease are rapidly evolving. Their efficacy and generally good results in treating superficial venous reflux (SVR) have been documented and compared in patients presenting with pain and swelling. A growing amount of literature is now available suggesting their efficacy in patients with venous leg ulcer (VLU). This review attempts to summarize the efficacy and limitations of commonly used venous interventions in the treatment of SVR and incompetent perforator veins (IPVs) in patients with VLU. A systematic review of the published literature was performed. Two different searches were conducted in MEDLINE, Embase, and EBSCOhost to identify studies that examined the efficacy of SVR ablation and IPV ablation on healing rate and recurrence rate of VLU. In the whole review, 1940 articles were screened. Of those, 45 were included in the SVR ablation review and 4 in the IPV ablation review. Data were too heterogeneous to perform an adequate meta-analysis. The quality of evidence assessed by the Grading of Recommendations Assessment, Development, and Evaluation for the two outcomes varied from very low to moderate. Ulcer healing rate and recurrence rate were between 70% and 100% and 0% and 49% in the SVR ablation review and between 59% and 93% and 4% and 33% in the IPV ablation review, respectively. To explain those variable results, limitations such as inadequate diagnostic techniques, saphenous size, concomitant calf pump dysfunction, and associated deep venous reflux are discussed. Currently available minimally invasive techniques correct most venous pathologic processes in chronic venous disease with a good sustainable healing rate. There are still specific diagnostic and efficacy limitations that mandate proper match of individual patients with the planned approach. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  6. A New Approach to Detection of Systematic Errors in Secondary Substation Monitoring Equipment Based on Short Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Javier Moriano

    2016-01-01

    Full Text Available In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies to make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected.
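
    One way to realize the gain/offset detection idea (a sketch, not necessarily the paper's algorithm) is to regress the metered load against the STLF prediction: if the forecast tracks the true load, the fitted slope and intercept estimate the meter's gain and offset. A minimal example with hypothetical data:

    ```python
    def estimate_gain_offset(forecast_kw, measured_kw):
        """Least-squares fit of measured ≈ gain·forecast + offset.

        A healthy meter should yield gain ≈ 1 and offset ≈ 0;
        large departures flag suspect measurement equipment.
        """
        n = len(forecast_kw)
        mx = sum(forecast_kw) / n
        my = sum(measured_kw) / n
        sxx = sum((x - mx) ** 2 for x in forecast_kw)
        sxy = sum((x - mx) * (y - my) for x, y in zip(forecast_kw, measured_kw))
        gain = sxy / sxx
        offset = my - gain * mx
        return gain, offset

    # Hypothetical hourly load forecast (kW) and readings from a faulty meter
    forecast = [120.0, 135.0, 150.0, 160.0, 140.0, 125.0]
    measured = [1.1 * x + 5.0 for x in forecast]   # injected gain and offset error
    gain, offset = estimate_gain_offset(forecast, measured)
    print(f"gain={gain:.2f}, offset={offset:.1f} kW")  # gain=1.10, offset=5.0 kW
    ```

    In practice the fit would be applied over a rolling window and tested against thresholds, since forecast error adds noise to the recovered gain and offset.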

  7. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  8. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  9. Determination of fission products and actinides by inductively coupled plasma-mass spectrometry using isotope dilution analysis. A study of random and systematic errors

    International Nuclear Information System (INIS)

    Ignacio Garcia Alonso, Jose

    1995-01-01

    The theory of the propagation of errors (random and systematic) for isotope dilution analysis (IDA) has been applied to the analysis of fission products and actinide elements by inductively coupled plasma-mass spectrometry (ICP-MS). Systematic errors in ID-ICP-MS arising from mass-discrimination (mass bias), detector non-linearity and isobaric interferences in the measured isotopes have to be corrected for in order to achieve accurate results. The mass bias factor and the detector dead-time can be determined by using natural elements with well-defined isotope abundances. A combined method for the simultaneous determination of both factors is proposed. On the other hand, isobaric interferences for some fission products and actinides cannot be eliminated using mathematical corrections (due to the unknown isotope abundances in the sample) and a chemical separation is necessary. The theory for random error propagation in IDA has been applied to the determination of non-natural elements by ICP-MS taking into account all possible sources of uncertainty with pulse counting detection. For the analysis of fission products, the selection of the right spike isotope composition and spike to sample ratio can be performed by applying conventional random propagation theory. However, it has been observed that, in the experimental determination of the isotope abundances of the fission product elements to be determined, the correction for mass-discrimination and the correction for detector dead-time losses contribute to the total random uncertainty. For the instrument used in the experimental part of this study, it was found that the random uncertainty on the measured isotope ratios followed Poisson statistics for low counting rates whereas, for high counting rates, source instability was the main source of error
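
    The two instrumental corrections named above (detector dead time and mass bias) can be sketched as follows. This is an illustrative sketch only: the dead-time constant, per-amu bias factor, and count rates are hypothetical, and the linear mass-bias law is one common choice among several (linear, power, exponential):

    ```python
    def dead_time_correct(rate_cps, tau_s):
        """Correct an observed count rate for non-paralyzable detector dead time:
        n_true = n_obs / (1 - n_obs·τ)."""
        return rate_cps / (1.0 - rate_cps * tau_s)

    def mass_bias_correct(ratio_meas, delta_mass, eps_per_amu):
        """Linear-law mass-bias correction: R_true = R_meas · (1 + ε·Δm)."""
        return ratio_meas * (1.0 + eps_per_amu * delta_mass)

    # Hypothetical values: 35 ns dead time, 0.5 %/amu mass bias, Δm = 3 (238U/235U)
    r238_235 = dead_time_correct(8.0e5, 35e-9) / dead_time_correct(6.0e3, 35e-9)
    print(mass_bias_correct(r238_235, 3, 0.005))
    ```

    Note how the dead-time correction inflates the high-count-rate isotope more than the low one, so an uncorrected ratio is systematically biased, which is exactly the effect the abstract identifies as a contributor to the total uncertainty.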

  10. Random and systematic errors in case–control studies calculating the injury risk of driving under the influence of psychoactive substances

    DEFF Research Database (Denmark)

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.

    2013-01-01

    Between 2006 and 2010, six population based case-control studies were conducted as part of the European research-project DRUID (DRiving Under the Influence of Drugs, alcohol and medicines). The aim of these case-control studies was to calculate odds ratios indicating the relative risk of serious....... The list of indicators that was identified in this study is useful both as guidance for systematic reviews and meta-analyses and for future epidemiological studies in the field of driving under the influence to minimize sources of errors already at the start of the study. © 2013 Published by Elsevier Ltd....

  11. Potential and Limitations of Cochrane Reviews in Pediatric Cardiology: A Systematic Analysis.

    Science.gov (United States)

    Poryo, Martin; Khosrawikatoli, Sara; Abdul-Khaliq, Hashim; Meyer, Sascha

    2017-04-01

    Evidence-based medicine has contributed substantially to the quality of medical care in pediatric and adult cardiology. However, our impression from the bedside is that a substantial number of Cochrane reviews generate inconclusive data that are of limited clinical benefit. We performed a systematic synopsis of Cochrane reviews published between 2001 and 2015 in the field of pediatric cardiology. Main outcome parameters were the number and percentage of conclusive, partly conclusive, and inconclusive reviews as well as their recommendations and their development over three a priori defined intervals. In total, 69 reviews were analyzed. Most of them examined preterm and term neonates (36.2%), whereas 33.3% also included non-pediatric patients. Leading topics were pharmacological issues (71.0%) followed by interventional (10.1%) and operative procedures (2.9%). The majority of reviews were inconclusive (42.9%), while 36.2% were conclusive and 21.7% partly conclusive. Although the number of published reviews increased during the three a priori defined time intervals, reviews with "no specific recommendations" remained stable while "recommendations in favor of an intervention" clearly increased. Main reasons for missing recommendations were insufficient data (n = 41) as well as an insufficient number of trials (n = 22) or poor study quality (n = 19). There is still a need for high-quality research, which will likely yield a greater number of Cochrane reviews with conclusive results.

  12. Chances and Limitations of Video Games in the Fight against Childhood Obesity-A Systematic Review.

    Science.gov (United States)

    Mack, Isabelle; Bayer, Carolin; Schäffeler, Norbert; Reiband, Nadine; Brölz, Ellen; Zurstiege, Guido; Fernandez-Aranda, Fernando; Gawrilow, Caterina; Zipfel, Stephan

    2017-07-01

    A systematic literature search was conducted to assess the chances and limitations of video games to combat and prevent childhood obesity. This search included studies with video or computer games targeting nutrition, physical activity and obesity for children between 7 and 15 years of age. The study distinguished between games that aimed to (i) improve knowledge about nutrition, eating habits and exercise; (ii) increase physical activity; or (iii) combine both approaches. Overall, the games were well accepted. On a qualitative level, most studies reported positive effects on obesity-related outcomes (improvement of weight-related parameters, physical activity or dietary behaviour/knowledge). However, the observed effects were small. The games did not address psychosocial aspects. Using video games for weight management exclusively does not deliver satisfying results. Video games as an additional guided component of prevention and treatment programs have the potential to increase compliance and thus enhance treatment outcome. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.

  13. Psychometric properties of self-reported questionnaires for the evaluation of symptoms and functional limitations in individuals with rotator cuff disorders: a systematic review.

    Science.gov (United States)

    St-Pierre, Corinne; Desmeules, François; Dionne, Clermont E; Frémont, Pierre; MacDermid, Joy C; Roy, Jean-Sébastien

    2016-01-01

    To conduct a systematic review of the psychometric properties (reliability, validity and responsiveness) of self-report questionnaires used to assess symptoms and functional limitations of individuals with rotator cuff (RC) disorders. A systematic search in three databases (Cinahl, Medline and Embase) was conducted. Data extraction and critical methodological appraisal were performed independently by three raters using structured tools, and agreement was achieved by consensus. A descriptive synthesis was performed. One-hundred and twenty articles reporting on 11 questionnaires were included. All questionnaires were highly reliable and responsive to change, and showed construct validity; seven questionnaires also showed known-group validity. The minimal detectable change ranged from 6.4% to 20.8% of total score; only two questionnaires (American Shoulder and Elbow Surgeon questionnaire [ASES] and Upper Limb Functional Index [ULFI]) had a measurement error below 10% of global score. Minimal clinically important differences were established for eight questionnaires, and ranged from 8% to 20% of total score. Overall, included questionnaires showed acceptable psychometric properties for individuals with RC disorders. The ASES and ULFI have the smallest absolute error of measurement, while the Western Ontario RC Index is one of the most responsive questionnaires for individuals suffering from RC disorders. All included questionnaires are reliable, valid and responsive for the evaluation of individuals with RC disorders. As all included questionnaires showed good psychometric properties for the targeted population, the choice should be made according to the purpose of the evaluation and to the construct being evaluated by the questionnaire. The WORC, an RC-specific questionnaire, appeared to be more responsive. It should therefore be used to evaluate change in time. If the evaluation is time-limited, shorter questionnaires or short versions should be considered (such as
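
    Minimal detectable change figures like those quoted above follow from standard psychometric formulas: SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM. A sketch with hypothetical reliability values (not taken from any questionnaire in the review):

    ```python
    import math

    def sem(sd, icc):
        """Standard error of measurement from test-retest SD and reliability (ICC)."""
        return sd * math.sqrt(1.0 - icc)

    def mdc95(sd, icc):
        """Minimal detectable change at 95% confidence: 1.96·√2·SEM."""
        return 1.96 * math.sqrt(2.0) * sem(sd, icc)

    # Hypothetical questionnaire: SD = 15 points on a 0-100 scale, ICC = 0.90
    print(round(mdc95(15.0, 0.90), 1))  # ≈ 13.1 points, i.e. 13.1% of total score
    ```

    This shows why even a highly reliable instrument (ICC = 0.90) can have an MDC above 10% of its scale, consistent with the 6.4%-20.8% range reported.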

  14. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    Science.gov (United States)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars such as with the Global Precipitation Measurement (GPM) mission.
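
    The separation of systematic bias from random error mentioned above can be illustrated by splitting matched satellite-minus-reference differences into their mean (systematic) and residual spread (random). The values below are hypothetical, not NMQ/Q2 data:

    ```python
    import statistics

    def decompose_error(satellite_mm, reference_mm):
        """Split satellite-minus-reference differences into a systematic bias
        (mean difference) and a random component (SD about that bias)."""
        diffs = [s - r for s, r in zip(satellite_mm, reference_mm)]
        bias = statistics.fmean(diffs)
        random_sd = statistics.stdev(diffs)
        return bias, random_sd

    # Hypothetical matched satellite vs ground-reference rain rates (mm/h)
    sat = [2.1, 5.4, 0.8, 12.3, 7.7]
    ref = [2.5, 6.0, 1.1, 13.0, 8.9]
    bias, random_sd = decompose_error(sat, ref)
    print(f"bias={bias:+.2f} mm/h, random SD={random_sd:.2f} mm/h")
    ```

    A calibration step can remove the systematic part, while the random part sets the floor of the per-pixel uncertainty, which is why the two components are reported separately.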

  15. Slotted rotatable target assembly and systematic error analysis for a search for long range spin dependent interactions from exotic vector boson exchange using neutron spin rotation

    Science.gov (United States)

    Haddock, C.; Crawford, B.; Fox, W.; Francis, I.; Holley, A.; Magers, S.; Sarsour, M.; Snow, W. M.; Vanderwerp, J.

    2018-03-01

    We discuss the design and construction of a novel target array of nonmagnetic test masses used in a neutron polarimetry measurement made in search of possible new exotic spin-dependent neutron-atom interactions of Nature at sub-mm length scales. This target was designed to accept and efficiently transmit a transversely polarized slow neutron beam through a series of long open parallel slots bounded by flat rectangular plates. These openings possessed equal atom density gradients normal to the slots from the flat test masses, with dimensions optimized to achieve maximum sensitivity to an exotic spin-dependent interaction from vector boson exchanges with ranges in the mm–μm regime. The parallel slots were oriented differently in four quadrants that can be rotated about the neutron beam axis in discrete 90° increments using a Geneva drive. The spin rotation signals from the 4 quadrants were measured using a segmented neutron ion chamber to suppress possible systematic errors from stray magnetic fields in the target region. We discuss the per-neutron sensitivity of the target to the exotic interaction, the design constraints, the potential sources of systematic errors which could be present in this design, and our estimate of the achievable sensitivity using this method.

  16. Limited evidence of abnormal intra-colonic pressure profiles in diverticular disease - a systematic review.

    Science.gov (United States)

    Jaung, R; Robertson, J; O'Grady, G; Milne, T; Rowbotham, D; Bissett, I P

    2017-06-01

    Abnormal colonic pressure profiles and high intraluminal pressures are postulated to contribute to the formation of sigmoid colon diverticulosis and the pathophysiology of diverticular disease. This study aimed to review evidence for abnormal colonic pressure profiles in diverticulosis. All published studies investigating colonic pressure in patients with diverticulosis were searched in three databases (Medline, Embase, Scopus). No language restrictions were applied. Any manometry studies in which patients with diverticulosis were compared with controls were included. The Newcastle-Ottawa Quality Assessment Scale (NOS) for case-control studies was used as a measure of risk of bias. A cut-off of five or more points on the NOS (fair quality in terms of risk of bias) was chosen for inclusion in the meta-analysis. Ten studies (published 1962-2005) met the inclusion criteria. The studies followed a wide variety of protocols and all used low-resolution manometry (sensor spacing range 7.5-15 cm). Six studies compared intra-sigmoid pressure, with five of six showing higher pressure in diverticulosis vs controls, but only two reached statistical significance. A meta-analysis was not performed as only two studies were above the cut-off and these did not have comparable outcomes. This systematic review of manometry data shows that evidence for abnormal pressure in the sigmoid colon in patients with diverticulosis is weak. Existing studies utilized inconsistent methodology, showed heterogeneous results and are of limited quality. Higher quality studies using modern manometric techniques and standardized reporting methods are needed to clarify the role of colonic pressure in diverticulosis. Colorectal Disease © 2017 The Association of Coloproctology of Great Britain and Ireland.

  17. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I2CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  18. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Science.gov (United States)

    Beria, Harsh; Nanda, Trushnamayee; Singh Bisht, Deepak; Chatterjee, Chandranath

    2017-12-01

    The last couple of decades have seen the outburst of a number of satellite-based precipitation products with Tropical Rainfall Measuring Mission (TRMM) as the most widely used for hydrologic applications. Transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for year 2014 and compared with the corresponding (2014) and retrospective (1998-2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  19. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Directory of Open Access Journals (Sweden)

    H. Beria

    2017-12-01

    Full Text Available The last couple of decades have seen the outburst of a number of satellite-based precipitation products with Tropical Rainfall Measuring Mission (TRMM) as the most widely used for hydrologic applications. Transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for the year 2014 and compared with the corresponding (2014) and retrospective (1998–2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  20. Limited Evidence on the Management of Respiratory Tract Infections in Down's Syndrome : A Systematic Review

    NARCIS (Netherlands)

    Manikam, Logan; Reed, Kate; Venekamp, Roderick P; Hayward, Andrew; Littlejohns, Peter; Schilder, Anne; Lakhanpaul, Monica

    2016-01-01

    AIMS: To systematically review the effectiveness of preventative and therapeutic interventions for respiratory tract infections (RTIs) in people with Down's syndrome. METHODS: Databases were searched for any published and ongoing studies of respiratory tract diseases in children and adults with Down's syndrome.

  1. Validation of the calculation of the renal impulse response function. An analysis of errors and systematic biases

    International Nuclear Information System (INIS)

    Erbsman, F.; Ham, H.; Piepsz, A.; Struyven, J.

    1978-01-01

    The renal impulse response function (renal IRF) is the time-activity curve measured over one kidney after injection of a radiopharmaceutical into the renal artery. If the tracer is injected intravenously, the renal IRF can be computed by deconvolving the kidney curve with a blood curve. In previous work we demonstrated that the computed IRF is in good agreement with measurements made after injection into the renal artery. The goal of the present work is the analysis of the effect of sampling errors and the influence of extra-renal activity. The sampling error is only important for the first point of the plasma curve and yields an ill-conditioned function P⁻¹. The addition of 50 computed renal IRFs demonstrated that the first three points show a larger variability due to incomplete mixing of the tracer; these points should thus not be included in the smoothing process. Subtraction of non-renal activity does not appreciably modify the shape of the renal IRF. The mean transit time and the time to half value are almost independent of non-renal activity and seem to be the parameters of choice.
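    The deconvolution step described above can be sketched numerically. The following is a minimal illustration with hypothetical mono-exponential curves (not the authors' clinical data), assuming a discrete convolution model in which the plasma curve forms a lower-triangular Toeplitz matrix P; the sensitivity to the first plasma sample mentioned above is visible in this formulation, since that sample occupies the entire diagonal of P:

    ```python
    import numpy as np

    def deconvolve_irf(kidney, plasma, dt=1.0):
        """Recover the renal IRF by discrete deconvolution, modeling
        kidney = (plasma * irf) * dt and solving the triangular system."""
        n = len(kidney)
        # Lower-triangular convolution matrix: P[i, j] = plasma[i - j] for j <= i
        P = np.zeros((n, n))
        for i in range(n):
            P[i, : i + 1] = plasma[i::-1]
        return np.linalg.solve(P * dt, kidney)

    # Synthetic example: a known IRF is recovered from the convolved kidney curve
    t = np.arange(20)
    irf_true = np.exp(-0.3 * t)
    plasma = np.exp(-0.5 * t)
    kidney = np.convolve(plasma, irf_true)[:20]  # dt = 1
    irf_est = deconvolve_irf(kidney, plasma)
    ```

    With noise-free curves the recovery is exact; perturbing only `plasma[0]` propagates an error into every diagonal entry of P, which is one way to see why the first plasma point dominates the conditioning.
    
    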

  2. Benzodiazepine Use During Hospitalization: Automated Identification of Potential Medication Errors and Systematic Assessment of Preventable Adverse Events.

    Directory of Open Access Journals (Sweden)

    David Franklin Niedrig

    Full Text Available Benzodiazepines and "Z-drug" GABA-receptor modulators (BDZ) are among the most frequently used drugs in hospitals. Adverse drug events (ADE) associated with BDZ can be the result of preventable medication errors (ME) related to dosing, drug interactions and comorbidities. The present study evaluated inpatient use of BDZ and related ME and ADE. We conducted an observational study within a pharmacoepidemiological database derived from the clinical information system of a tertiary care hospital. We developed algorithms that identified dosing errors and interacting comedication for all administered BDZ. Associated ADE and risk factors were validated in medical records. Among 53,081 patients contributing 495,813 patient-days, BDZ were administered to 25,626 patients (48.3%) on 115,150 patient-days (23.2%). We identified 3,372 patient-days (2.9%) with comedication that inhibits BDZ metabolism, and 1,197 (1.0%) with lorazepam administration in severe renal impairment. After validation, we classified 134, 56, 12 and 3 cases involving lorazepam, zolpidem, midazolam and triazolam, respectively, as clinically relevant ME. Among those were 23 cases with associated adverse drug events, including severe CNS depression, falls with subsequent injuries and severe dyspnea. Causality for BDZ was formally assessed as 'possible' or 'probable' in 20 of those cases. Four cases with ME and associated severe ADE required administration of the BDZ antagonist flumazenil. BDZ use was remarkably high in the studied setting, frequently involved potential ME related to dosing, co-medication and comorbidities, and rarely involved cases with associated ADE. We propose the implementation of automated ME screening and validation for the prevention of BDZ-related ADE.

  3. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    Science.gov (United States)

    Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.

    2014-01-01

    Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877

  4. Systematic review of the limited evidence for different surgical techniques at benign hysterectomy

    DEFF Research Database (Denmark)

    Sloth, Sigurd Beier; Schroll, Jeppe Bennekou; Settnes, Annette

    2017-01-01

    guideline on the subject based on a systematic review of the literature. A guideline panel of seven gynecologists formulated the clinical questions for the guideline. A search specialist performed the comprehensive literature search. The guideline panel reviewed the literature and rated the quality...

  5. The limited prosocial effects of meditation: A systematic review and meta-analysis

    NARCIS (Netherlands)

    Kreplin, U.; Farias, M.; Brazil, I.A.

    2018-01-01

    Many individuals believe that meditation has the capacity to not only alleviate mental-illness but to improve prosociality. This article systematically reviewed and meta-analysed the effects of meditation interventions on prosociality in randomized controlled trials of healthy adults. Five types of

  6. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  7. Evaluation of measurement properties of self-administered PROMs aimed at patients with non-specific shoulder pain and "activity limitations": a systematic review.

    Science.gov (United States)

    Thoomes-de Graaf, M; Scholten-Peeters, G G M; Schellingerhout, J M; Bourne, A M; Buchbinder, R; Koehorst, M; Terwee, C B; Verhagen, A P

    2016-09-01

    To critically appraise and compare the measurement properties of self-administered patient-reported outcome measures (PROMs) focussing on the shoulder, assessing "activity limitations." Systematic review. The study population had to consist of patients with shoulder pain. We excluded postoperative patients or patients with generic diseases. The methodological quality of the selected studies and the results of the measurement properties were critically appraised and rated using the COSMIN checklist. Out of a total of 3427 unique hits, 31 articles, evaluating 7 different questionnaires, were included. The SPADI is the most frequently evaluated PROM and its measurement properties seem adequate apart from a lack of information regarding its measurement error and content validity. For English, Norwegian and Turkish users, we recommend to use the SPADI. Dutch users could use either the SDQ or the SST. In German, we recommend the DASH. In Tamil, Slovene, Spanish and the Danish languages, the evaluated PROMs were not yet of acceptable validity. None of these PROMs showed strong positive evidence for all measurement properties. We propose to develop a new shoulder PROM focused on activity limitations, taking new knowledge and techniques into account.

  8. Limited evidence for the use of imaging to detect prostate cancer: A systematic review

    International Nuclear Information System (INIS)

    Blomqvist, L.; Carlsson, S.; Gjertsson, P.; Heintz, E.; Hultcrantz, M.; Mejare, I.; Andrén, O.

    2014-01-01

    Highlights: • In men with clinical suspicion of prostate cancer, ultrasound-guided systematic biopsy is the gold standard for diagnosis. • Diagnostic imaging techniques, especially magnetic resonance imaging, are being used in trials to aid detection of prostate cancer. • To date, there is insufficient scientific evidence for the use of imaging techniques to detect prostate cancer. - Abstract: Objective: To assess the diagnostic accuracy of imaging technologies for detecting prostate cancer in patients with elevated PSA values or suspected findings on clinical examination. Methods: The databases Medline, EMBASE, Cochrane, CRD HTA/DARE/NHS EED and EconLit were searched until June 2013. Pre-determined inclusion criteria were used to select full-text articles. Risk of bias in individual studies was rated according to QUADAS or AMSTAR. Abstracts and full-text articles were assessed independently by two reviewers. The performance of diagnostic imaging was compared with systematic biopsies (reference standard) and sensitivity and specificity were calculated. Results: The literature search yielded 5141 abstracts, which were reviewed by two independent reviewers. Of these, 4852 were excluded since they did not meet the inclusion criteria. 288 articles were reviewed in full text for quality assessment. Six studies, three using MRI and three using transrectal ultrasound, were included. All were rated as high risk of bias. Relevant studies on PET/CT were not identified. Conclusion: Despite clinical use, there is insufficient evidence regarding the accuracy of imaging technologies for detecting cancer in patients with suspected prostate cancer using TRUS-guided systematic biopsies as the reference standard.

  9. Limited evidence for the use of imaging to detect prostate cancer: A systematic review

    Energy Technology Data Exchange (ETDEWEB)

    Blomqvist, L., E-mail: lennart.k.blomqvist@ki.se [Department of Diagnostic Radiology, Karolinska University Hospital, Solna (Sweden); Department of Molecular Medicine and Surgery, Karolinska Institutet, Stockholm (Sweden); Carlsson, S. [Department of Molecular Medicine and Surgery, Karolinska Institutet, Stockholm (Sweden); Department of Urology, Karolinska University Hospital, Solna (Sweden); Gjertsson, P. [Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg (Sweden); Heintz, E.; Hultcrantz, M.; Mejare, I. [The Swedish Council on Health Technology Assessment, Stockholm (Sweden); Andrén, O. [School of Health and Medical Sciences, Örebro University, Örebro (Sweden); Department of Urology, Örebro University Hospital, Örebro (Sweden)

    2014-09-15

    Highlights: • In men with clinical suspicion of prostate cancer, ultrasound-guided systematic biopsy is the gold standard for diagnosis. • Diagnostic imaging techniques, especially magnetic resonance imaging, are being used in trials to aid detection of prostate cancer. • To date, there is insufficient scientific evidence for the use of imaging techniques to detect prostate cancer. - Abstract: Objective: To assess the diagnostic accuracy of imaging technologies for detecting prostate cancer in patients with elevated PSA values or suspected findings on clinical examination. Methods: The databases Medline, EMBASE, Cochrane, CRD HTA/DARE/NHS EED and EconLit were searched until June 2013. Pre-determined inclusion criteria were used to select full-text articles. Risk of bias in individual studies was rated according to QUADAS or AMSTAR. Abstracts and full-text articles were assessed independently by two reviewers. The performance of diagnostic imaging was compared with systematic biopsies (reference standard) and sensitivity and specificity were calculated. Results: The literature search yielded 5141 abstracts, which were reviewed by two independent reviewers. Of these, 4852 were excluded since they did not meet the inclusion criteria. 288 articles were reviewed in full text for quality assessment. Six studies, three using MRI and three using transrectal ultrasound, were included. All were rated as high risk of bias. Relevant studies on PET/CT were not identified. Conclusion: Despite clinical use, there is insufficient evidence regarding the accuracy of imaging technologies for detecting cancer in patients with suspected prostate cancer using TRUS-guided systematic biopsies as the reference standard.

  10. Random and systematic sampling error when hooking fish to monitor skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden in Australian farmed yellowtail kingfish (Seriola lalandi).

    Science.gov (United States)

    Fensham, J R; Bubner, E; D'Antignana, T; Landos, M; Caraguel, C G B

    2018-05-01

    The Australian farmed yellowtail kingfish (Seriola lalandi, YTK) industry monitors skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden by pooling the fluke counts of 10 hooked YTK. The random and systematic error of this sampling strategy was evaluated to assess its potential impact on treatment decisions. Fluke abundance (fluke count per fish) in a study cage (estimated 30,502 fish) was assessed five times using the current sampling protocol, and its repeatability was estimated using the repeatability coefficient (CR) and the coefficient of variation (CV). Individual body weight, fork length, fluke abundance, prevalence, intensity (fluke count per infested fish) and density (fluke count per kg of fish) were compared between 100 hooked and 100 seined YTK (assumed representative of the entire population) to estimate potential selection bias. Depending on the fluke species and age category, CR (the expected difference in parasite count between two sampling iterations) ranged from 0.78 to 114 flukes per fish. Capturing YTK by hooking increased the selection of fish of a weight and length in the lowest 5th percentile of the cage (RR = 5.75, 95% CI: 2.06-16.03, P-value = 0.0001). These lower-end YTK had on average an extra 31 juvenile and 6 adult Z. seriolae per kg of fish and an extra 3 juvenile and 0.4 adult B. seriolae per kg of fish, compared to the rest of the cage population. Hooking thus biased sampling towards the smallest and most heavily infested fish in the population, resulting in poor repeatability (more variability amongst sampled fish) and an overestimation of parasite burden in the population. In this particular commercial situation these findings supported the health management program, since an underestimation of parasite burden could have had a production impact on the study population. In instances where fish populations and parasite burdens are more homogeneous, sampling error may be less severe. Sampling error when capturing fish
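    The two repeatability statistics named above have standard forms. The following is a minimal sketch with hypothetical repeat counts (the 0.78-114 range above comes from the study itself, not from this example), using the common Bland-Altman convention CR ≈ 2.77 × the standard deviation of repeat measurements:

    ```python
    import numpy as np

    # Hypothetical pooled fluke counts from five repeats of the 10-fish protocol
    repeats = np.array([42.0, 55.0, 48.0, 39.0, 51.0])

    sd = repeats.std(ddof=1)  # sample standard deviation across repeats
    cv = sd / repeats.mean()  # coefficient of variation: relative spread
    cr = 2.77 * sd            # repeatability coefficient: 95% of pairs of
                              # repeat measurements differ by less than this
    ```

    A CR that is large relative to the treatment-decision threshold, as found for some fluke categories in the study, means two iterations of the same protocol can support opposite decisions.
    
    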

  11. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza

    2017-03-14

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long-term pairing for users that have a non-line-of-sight (NLOS) interfering link. Consequently, we study the interference-limited problem that appears between NLOS HD user pairs scheduled on the same FD channel. The distribution of the interference is first characterized via its distribution function, which is derived in closed form. Then, a comprehensive performance assessment for the proposed pairing scheme is provided by assuming Extended Generalized-K (EGK) fading for the downlink and studying different modulation schemes. To this end, a unified closed-form expression for the average symbol error rate is derived. Furthermore, we show the effective downlink throughput gain harvested by pairing NLOS users as a function of the average signal-to-interference ratio when compared to an idealized HD scenario with neither interference nor noise. Finally, we show the minimum required channel-gain pairing threshold to harvest downlink throughput via the FD operation when compared to the HD case for each modulation scheme.

  12. Why Are Older People Often So Responsible and Considerate Even When Their Future Seems Limited? A Systematic Review.

    Science.gov (United States)

    Moss, Simon A; Wilson, Samuel G

    2018-01-01

    Socioemotional selectivity theory assumes that older individuals tend to perceive their identity or life as limited in time and, therefore, prioritize meaningful relationships. Yet other research shows that people who perceive their identity as limited in time tend to behave impulsively, contrary to the behavior of many older individuals. To redress this paradox, this article reports a systematic review, comprising 86 papers, that examined the consequences of whether individuals perceive their identity as limited or enduring. To reconcile conflicts in the literature, we propose that, before an impending transition, some individuals perceive their life now as dissociated from their future goals and, therefore, will tend to behave impulsively. Other individuals, however, especially if older, tend to pursue a quest or motivation that transcends this transition, fostering delayed gratification and responsible behavior.

  13. A systematic review and synthesis of the strengths and limitations of measuring malaria mortality through verbal autopsy.

    Science.gov (United States)

    Herrera, Samantha; Enuameh, Yeetey; Adjei, George; Ae-Ngibise, Kenneth Ayuurebobi; Asante, Kwaku Poku; Sankoh, Osman; Owusu-Agyei, Seth; Yé, Yazoume

    2017-10-23

    Lack of valid and reliable data on malaria deaths continues to be a problem that plagues the global health community. To address this gap, the verbal autopsy (VA) method was developed to ascertain cause of death at the population level. Despite the adoption and wide use of VA, there are many recognized limitations of VA tools and methods, especially for measuring malaria mortality. This study synthesizes the strengths and limitations of existing VA tools and methods for measuring malaria mortality (MM) in low- and middle-income countries through a systematic literature review. The authors searched PubMed, Cochrane Library, Popline, WHOLIS, Google Scholar, and INDEPTH Network Health and Demographic Surveillance System sites' websites from 1 January 1990 to 15 January 2016 for articles and reports on MM measurement through VA. Inclusion criteria were that the article presented results from a VA study where malaria was a cause of death, or discussed limitations and challenges related to measurement of MM through VA. Two authors independently searched the databases and websites and conducted a synthesis of articles using a standard matrix. The authors identified 828 publications; 88 were included in the final review. Most publications were VA studies; others were systematic reviews discussing VA tools or methods, editorials or commentaries, and studies using VA data to develop MM estimates. The main limitations were the low sensitivity and specificity of VA tools for measuring MM. Other limitations included the lack of standardized VA tools and methods and the lack of a 'true' gold standard to assess the accuracy of VA malaria mortality estimates. Existing VA tools and methods for measuring MM have limitations. Given the need for data to measure progress toward the World Health Organization's Global Technical Strategy for Malaria 2016-2030 goals, the malaria community should define strategies for improving MM estimates, including exploring whether VA tools and methods could be further improved. Longer-term strategies should focus

  14. Criminal Systematics and the Limits of Functionalist Proposals (Considerations on Guarantees, Citizenship and Human Rights)

    Directory of Open Access Journals (Sweden)

    Felipe Augusto Forte de Negreiros Deodat

    2016-06-01

    Full Text Available To critique functionalism means looking at the history of the construction of penal systems. Rigor of analysis is imposed once we take a system as a working tool. It is essential, given what now arises in legal and criminal terms, that the study of criminal law become increasingly precise and also closer to the idea of human dignity. A critique is also developed of the two doctrines that changed the face of the first systematics, designed in the nineteenth century, which will allow us to see more accurately what can, or even should, be changed. One cannot help but praise normativism, especially as it received the indelible strengthening of the cultivators of criminal science over the past half century.

  15. Assessment of activity limitations and participation restrictions with persons with chronic fatigue syndrome: a systematic review.

    Science.gov (United States)

    Vergauwen, Kuni; Huijnen, Ivan P J; Kos, Daphne; Van de Velde, Dominique; van Eupen, Inge; Meeus, Mira

    2015-01-01

    To summarize measurement instruments used to evaluate activity limitations and participation restrictions in patients with chronic fatigue syndrome (CFS) and review the psychometric properties of these instruments. General information on all included measurement instruments was extracted. The methodological quality was evaluated using the COSMIN checklist. Results of the measurement properties were rated based on the quality criteria of Terwee et al. Finally, overall quality was defined per psychometric property and measurement instrument by use of the quality criteria of Schellingerhout et al. A total of 68 articles were identified, of which eight evaluated the psychometric properties of a measurement instrument assessing activity limitations and participation restrictions. One disease-specific and 37 generic measurement instruments were found. Limited evidence was found for the psychometric properties and clinical usability of these instruments. However, the CFS-activities and participation questionnaire (APQ) is a disease-specific instrument with moderate content and construct validity. The psychometric properties of the reviewed measurement instruments to evaluate activity limitations and participation restrictions are not sufficiently evaluated. Future research is needed to evaluate the psychometric properties of the measurement instruments, including the other properties of the CFS-APQ. If it is necessary to use a measurement instrument, the CFS-APQ is recommended. Chronic fatigue syndrome (CFS) causes activity limitations and participation restrictions in one or more areas of life. Standardized, reliable and valid measurement instruments are necessary to identify these limitations and restrictions. Currently, no measurement instrument is sufficiently evaluated for persons with CFS. If a measurement instrument is needed to identify activity limitations and participation restrictions in persons with CFS, it is recommended to use the CFS-APQ.

  16. Colorado River sediment transport: 2. Systematic bed‐elevation and grain‐size effects of sand supply limitation

    Science.gov (United States)

    Topping, David J.; Rubin, David M.; Nelson, Jonathan M.; Kinzel, Paul J.; Corson, Ingrid C.

    2000-01-01

    The Colorado River in Marble and Grand Canyons displays evidence of annual supply limitation with respect to sand both prior to [Topping et al, this issue] and after the closure of Glen Canyon Dam in 1963. Systematic changes in bed elevation and systematic coupled changes in suspended‐sand concentration and grain size result from this supply limitation. During floods, sand supply limitation either causes or modifies a lag between the time of maximum discharge and the time of either maximum or minimum (depending on reach geometry) bed elevation. If, at a cross section where the bed aggrades with increasing flow, the maximum bed elevation is observed to lead the peak or the receding limb of a flood, then this observed response of the bed is due to sand supply limitation. Sand supply limitation also leads to the systematic evolution of sand grain size (both on the bed and in suspension) in the Colorado River. Sand input during a tributary flood travels down the Colorado River as an elongating sediment wave, with the finest sizes (because of their lower settling velocities) traveling the fastest. As the fine front of a sediment wave arrives at a given location, the bed fines and suspended‐sand concentrations increase in response to the enhanced upstream supply of finer sand. Then, as the front of the sediment wave passes that location, the bed is winnowed and suspended‐sand concentrations decrease in response to the depletion of the upstream supply of finer sand. The grain‐size effects of depletion of the upstream sand supply are most obvious during periods of higher dam releases (e.g., the 1996 flood experiment and the 1997 test flow). Because of substantial changes in the grain‐size distribution of the bed, stable relationships between the discharge of water and sand‐transport rates (i.e., stable sand rating curves) are precluded. Sand budgets in a supply‐limited river like the Colorado River can only be constructed through inclusion of the physical

  17. Analysis and reduction of 3D systematic and random setup errors during the simulation and treatment of lung cancer patients with CT-based external beam radiotherapy dose planning.

    NARCIS (Netherlands)

    Boer, H.D. de; Sornsen de Koste, J.R. van; Senan, S.; Visser, A.G.; Heijmen, B.J.M.

    2001-01-01

    PURPOSE: To determine the magnitude of the errors made in (a) the setup of patients with lung cancer on the simulator relative to their intended setup with respect to the planned treatment beams and (b) in the setup of these patients on the treatment unit. To investigate how the systematic component

  18. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors contribute substantially to the risk of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. To avoid human errors it is necessary to adapt systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but they are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic effort. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  19. Itô-SDE MCMC method for Bayesian characterization of errors associated with data limitations in stochastic expansion methods for uncertainty quantification

    Science.gov (United States)

    Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.

    2017-11-01

    This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
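    As a toy illustration of the underlying idea (not the authors' implicit-Euler scheme or their metal-forming application), the following sketch uses an explicit Euler-Maruyama discretization of the overdamped Langevin SDE, whose invariant distribution is the target posterior; here the "posterior" is simply a standard normal, so the chain's long-run mean and variance should approach 0 and 1:

    ```python
    import numpy as np

    def langevin_chain(grad_log_post, x0, n_steps=200_000, dt=0.01, seed=0):
        """Euler-Maruyama discretization of the overdamped Langevin SDE
        dX_t = grad log pi(X_t) dt + sqrt(2) dW_t, which is ergodic for pi."""
        rng = np.random.default_rng(seed)
        x, out = x0, np.empty(n_steps)
        for i in range(n_steps):
            x = x + dt * grad_log_post(x) + np.sqrt(2 * dt) * rng.standard_normal()
            out[i] = x
        return out

    # Toy posterior: standard normal, so grad log pi(x) = -x
    samples = langevin_chain(lambda x: -x, x0=3.0)[50_000:]  # discard burn-in
    ```

    The explicit scheme carries a discretization bias that shrinks with the step size dt; the implicit Euler discretization studied in the paper trades per-step cost for better stability properties.
    
    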

  20. A Low-Cost Environmental Monitoring System: How to Prevent Systematic Errors in the Design Phase through the Combined Use of Additive Manufacturing and Thermographic Techniques

    Directory of Open Access Journals (Sweden)

    Francesco Salamone

    2017-04-01

    Full Text Available nEMoS (nano Environmental Monitoring System) is a 3D-printed device built following the Do-It-Yourself (DIY) approach. It can be connected to the web and it can be used to assess indoor environmental quality (IEQ). It is built using some low-cost sensors connected to an Arduino microcontroller board. The device is assembled in a small-sized case and both thermohygrometric sensors used to measure the air temperature and relative humidity, and the globe thermometer used to measure the radiant temperature, can be subject to thermal effects due to overheating of some nearby components. A thermographic analysis was made to rule out this possibility. The paper shows how the pervasive technique of additive manufacturing can be combined with the more traditional thermographic techniques to redesign the case and to verify the accuracy of the optimized system in order to prevent instrumental systematic errors in terms of the difference between experimental and actual values of the above-mentioned environmental parameters.

  1. Evaluation of Stability of Complexes of Inner Transition Metal Ions with 2-Oxo-1-pyrrolidine Acetamide and Role of Systematic Errors

    Directory of Open Access Journals (Sweden)

    Sangita Sharma

    2011-01-01

    Full Text Available BEST FIT models were used to study the complexation of inner transition metal ions such as Y(III), La(III), Ce(III), Pr(III), Nd(III), Sm(III), Gd(III), Dy(III) and Th(IV) with 2-oxo-1-pyrrolidine acetamide at 30 °C in 10%, 20%, 30%, 40%, 50% and 60% v/v dioxane-water mixtures at 0.2 M ionic strength. The Irving-Rossotti titration method was used to obtain titration data. Calculations were carried out with the PKAS and BEST Fortran IV computer programs. The expected species, such as L, LH+, ML, ML2 and ML(OH)3, were obtained with SPEPLOT. The stability of the complexes increased with increasing dioxane content. The observed change in stability can be explained on the basis of electrostatic effects, non-electrostatic effects, the solvating power of the solvent mixture, interactions between ions and interactions of ions with solvents. Effects of systematic errors, such as dissolved carbon dioxide and the concentrations of alkali, acid, ligand and metal, are also explained.
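As a toy illustration of how stability constants translate into species distributions, the sketch below computes the fraction of metal present as M, ML and ML2 from overall constants beta_n; the log(beta) values and the free-ligand concentration are invented for illustration, not taken from the study.

```python
# Hypothetical overall stability constants beta_n = [MLn] / ([M][L]^n);
# these log(beta) values are illustrative, not those reported in the study.
LOG_BETA = {1: 3.2, 2: 5.8}

def species_fractions(free_L, log_beta):
    """Fraction of the total metal present as M (n=0), ML (n=1), ML2 (n=2)
    at a given free-ligand concentration, from the metal mass balance."""
    terms = {0: 1.0}  # free metal ion
    for n, lb in log_beta.items():
        terms[n] = 10.0 ** lb * free_L ** n
    denom = sum(terms.values())
    return {n: t / denom for n, t in terms.items()}

fractions = species_fractions(free_L=1e-3, log_beta=LOG_BETA)
print({n: round(f, 3) for n, f in fractions.items()})
```

Raising the stability constants (as observed with increasing dioxane content) shifts the distribution toward the complexed species.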

  2. A Low-Cost Environmental Monitoring System: How to Prevent Systematic Errors in the Design Phase through the Combined Use of Additive Manufacturing and Thermographic Techniques.

    Science.gov (United States)

    Salamone, Francesco; Danza, Ludovico; Meroni, Italo; Pollastro, Maria Cristina

    2017-04-11

    nEMoS (nano Environmental Monitoring System) is a 3D-printed device built following the Do-It-Yourself (DIY) approach. It can be connected to the web and it can be used to assess indoor environmental quality (IEQ). It is built using some low-cost sensors connected to an Arduino microcontroller board. The device is assembled in a small-sized case and both thermohygrometric sensors used to measure the air temperature and relative humidity, and the globe thermometer used to measure the radiant temperature, can be subject to thermal effects due to overheating of some nearby components. A thermographic analysis was made to rule out this possibility. The paper shows how the pervasive technique of additive manufacturing can be combined with the more traditional thermographic techniques to redesign the case and to verify the accuracy of the optimized system in order to prevent instrumental systematic errors in terms of the difference between experimental and actual values of the above-mentioned environmental parameters.
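The globe-thermometer measurement mentioned above is conventionally converted to a mean radiant temperature from globe temperature, air temperature and air speed. The sketch below uses the widely cited ISO 7726 forced-convection approximation for a standard 150 mm black globe; whether nEMoS uses this exact relation is an assumption, and its smaller globe would require a different coefficient.

```python
def mean_radiant_temperature(t_globe, t_air, v_air):
    """ISO 7726 forced-convection approximation for a standard 150 mm
    black globe (emissivity ~0.95); temperatures in deg C, air speed in m/s.
    NOTE: the 2.5e8 coefficient is specific to the standard globe size."""
    tg_k = t_globe + 273.0
    mrt_k4 = tg_k ** 4 + 2.5e8 * v_air ** 0.6 * (t_globe - t_air)
    return mrt_k4 ** 0.25 - 273.0

# When globe and air temperature agree, the radiant temperature equals both.
print(mean_radiant_temperature(25.0, 25.0, 0.2))   # ~25.0
print(mean_radiant_temperature(30.0, 25.0, 0.5))   # above the globe reading
```

This makes the systematic-error concern concrete: any self-heating of the globe sensor propagates directly into the fourth-power term.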

  3. Resistance training for activity limitations in older adults with skeletal muscle function deficits: a systematic review

    Directory of Open Access Journals (Sweden)

    Papa EV

    2017-06-01

    Full Text Available Evan V Papa,1 Xiaoyang Dong,2 Mahdi Hassan1 1Department of Rehabilitation Medicine, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi Province, People’s Republic of China; 2Department of Physical Therapy, University of North Texas Health Science Center, Fort Worth, TX, USA Abstract: Human aging results in a variety of changes to skeletal muscle. Sarcopenia is the age-associated loss of muscle mass and is one of the main contributors to musculoskeletal impairments in the elderly. Previous research has demonstrated that resistance training can attenuate skeletal muscle function deficits in older adults; however, few articles have focused on the effects of resistance training on functional mobility. The purpose of this systematic review was to (1) present the current state of literature regarding the effects of resistance training on functional mobility outcomes for older adults with skeletal muscle function deficits and (2) provide clinicians with practical guidelines that can be used with seniors during resistance training, or to encourage exercise. We set forth evidence that resistance training can attenuate age-related changes in functional mobility, including improvements in gait speed, static and dynamic balance, and fall risk reduction. Older adults should be encouraged to participate in progressive resistance training activities, and should be admonished to move along a continuum of exercise from immobility toward the recommended daily amounts of activity. Keywords: aging, strength training, sarcopenia, mobility, balance

  4. Systematic investigation of NLTE phenomena in the limit of small departures from LTE

    Science.gov (United States)

    Libby, S. B.; Graziani, F. R.; More, R. M.; Kato, T.

    1997-04-01

    In this paper, we begin a systematic study of Non-Local Thermal Equilibrium (NLTE) phenomena in near equilibrium (LTE) high energy density, highly radiative plasmas. It is shown that the principle of minimum entropy production rate characterizes NLTE steady states for average atom rate equations in the case of small departures from LTE. With the aid of a novel hohlraum-reaction box thought experiment, we use the principles of minimum entropy production and detailed balance to derive Onsager reciprocity relations for the NLTE responses of a near equilibrium sample to non-Planckian perturbations in different frequency groups. This result is a significant symmetry constraint on the linear corrections to Kirchhoff's law. We envisage applying our strategy to a number of test problems which include: the NLTE corrections to the ionization state of an ion located near the edge of an otherwise LTE medium; the effect of a monochromatic radiation field perturbation on an LTE medium; the deviation of Rydberg state populations from LTE in recombining or ionizing plasmas; multi-electron temperature models such as that of Busquet; and finally, the effect of NLTE population shifts on opacity models.
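In standard near-equilibrium linear-response notation (an assumption; the paper's own symbols may differ), the reciprocity constraint described above takes the familiar form:

```latex
% Fluxes J_i respond linearly to non-Planckian perturbations X_j in
% frequency group j, with symmetric (reciprocal) response coefficients:
J_i = \sum_j L_{ij} X_j, \qquad L_{ij} = L_{ji},
% and the entropy production rate, minimized at the NLTE steady state,
% is the associated non-negative quadratic form:
\sigma = \sum_{i,j} L_{ij} X_i X_j \ge 0.
```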

  5. Systematic investigation of NLTE phenomena in the limit of small departures from LTE

    International Nuclear Information System (INIS)

    Libby, S.B.; Graziani, F.R.; More, R.M.; Kato, T.

    1997-01-01

    In this paper, we begin a systematic study of Non-Local Thermal Equilibrium (NLTE) phenomena in near equilibrium (LTE) high energy density, highly radiative plasmas. It is shown that the principle of minimum entropy production rate characterizes NLTE steady states for average atom rate equations in the case of small departures from LTE. With the aid of a novel hohlraum-reaction box thought experiment, we use the principles of minimum entropy production and detailed balance to derive Onsager reciprocity relations for the NLTE responses of a near equilibrium sample to non-Planckian perturbations in different frequency groups. This result is a significant symmetry constraint on the linear corrections to Kirchhoff's law. We envisage applying our strategy to a number of test problems which include: the NLTE corrections to the ionization state of an ion located near the edge of an otherwise LTE medium; the effect of a monochromatic radiation field perturbation on an LTE medium; the deviation of Rydberg state populations from LTE in recombining or ionizing plasmas; multi-electron temperature models such as that of Busquet; and finally, the effect of NLTE population shifts on opacity models. copyright 1997 American Institute of Physics

  6. Systematic investigation of NLTE phenomena in the limit of small departures from LTE

    International Nuclear Information System (INIS)

    Libby, S. B.; Graziani, F. R.; More, R. M.; Kato, T.

    1997-01-01

    In this paper, we begin a systematic study of Non-Local Thermal Equilibrium (NLTE) phenomena in near equilibrium (LTE) high energy density, highly radiative plasmas. It is shown that the principle of minimum entropy production rate characterizes NLTE steady states for average atom rate equations in the case of small departures from LTE. With the aid of a novel hohlraum-reaction box thought experiment, we use the principles of minimum entropy production and detailed balance to derive Onsager reciprocity relations for the NLTE responses of a near equilibrium sample to non-Planckian perturbations in different frequency groups. This result is a significant symmetry constraint on the linear corrections to Kirchhoff's law. We envisage applying our strategy to a number of test problems which include: the NLTE corrections to the ionization state of an ion located near the edge of an otherwise LTE medium; the effect of a monochromatic radiation field perturbation on an LTE medium; the deviation of Rydberg state populations from LTE in recombining or ionizing plasmas; multi-electron temperature models such as that of Busquet; and finally, the effect of NLTE population shifts on opacity models

  7. Neuropathic pain screening questionnaires have limited measurement properties. A systematic review.

    Science.gov (United States)

    Mathieson, Stephanie; Maher, Christopher G; Terwee, Caroline B; Folly de Campos, Tarcisio; Lin, Chung-Wei Christine

    2015-08-01

    The Douleur Neuropathique 4 (DN4), ID Pain, Leeds Assessment of Neuropathic Symptoms and Signs (LANSS), PainDETECT, and Neuropathic Pain Questionnaire have been recommended as screening questionnaires for neuropathic pain. This systematic review aimed to evaluate the measurement properties (eg, criterion validity and reliability) of these questionnaires. Online database searches were conducted and two independent reviewers screened studies and extracted data. Methodological quality of included studies and the measurement properties were assessed against established criteria. A modified Grading of Recommendations Assessment, Development and Evaluation approach was used to summarize the level of evidence. Thirty-seven studies were included. Most studies recruited participants from pain clinics. The original versions of the DN4 (French) and Neuropathic Pain Questionnaire (English) had the greatest number of satisfactory measurement properties. The ID Pain (English) demonstrated satisfactory hypothesis testing and reliability, but all other properties tested were unsatisfactory. The LANSS (English) was unsatisfactory for all properties, except specificity. The PainDETECT (English) demonstrated satisfactory hypothesis testing and criterion validity. In general, the cross-cultural adaptations had less evidence than the original versions. Overall, the DN4 and Neuropathic Pain Questionnaire were most suitable for clinical use. These screening questionnaires should not replace a thorough clinical assessment. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
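Criterion validity of screening questionnaires of this kind is typically summarized by sensitivity and specificity against a clinical reference diagnosis. A minimal sketch with invented counts (not data from the review):

```python
def sens_spec(results):
    """Sensitivity and specificity of a screening questionnaire, given a
    list of (screen_positive, has_condition) pairs against a reference
    clinical diagnosis."""
    tp = sum(s and c for s, c in results)
    fn = sum((not s) and c for s, c in results)
    tn = sum((not s) and (not c) for s, c in results)
    fp = sum(s and (not c) for s, c in results)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative data only: 100 patients with the condition, 100 without.
data = [(True, True)] * 80 + [(False, True)] * 20 + \
       [(False, False)] * 90 + [(True, False)] * 10
print(sens_spec(data))  # → (0.8, 0.9)
```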

  8. Systematic classical continuum limits of integrable spin chains and emerging novel dualities

    International Nuclear Information System (INIS)

    Avan, Jean; Doikou, Anastasia; Sfetsos, Konstadinos

    2010-01-01

    We examine certain classical continuum long-wavelength limits of prototype integrable quantum spin chains. We define the corresponding construction of classical continuum Lax operators. Our discussion starts with the XXX chain, the anisotropic Heisenberg model and their generalizations, and extends to the generic isotropic and anisotropic gl(n) magnets. Certain classical and quantum integrable models emerging from special 'dualities' of quantum spin chains, parametrized by c-number matrices, are also presented.

  9. Systematic review of power mobility outcomes for infants, children and adolescents with mobility limitations.

    Science.gov (United States)

    Livingstone, Roslyn; Field, Debra

    2014-10-01

    To summarize and critically appraise the evidence related to power mobility use in children (18 years or younger) with mobility limitations. Searches were performed in 12 electronic databases along with hand searching for articles published in English to September 2012 and updated February 2014. The search was restricted to quantitative studies including at least one child with a mobility limitation and measuring an outcome related to power mobility device use. Articles were appraised using American Academy of Cerebral Palsy and Developmental Medicine (AACPDM) criteria for group and single-subject designs. The PRISMA statement was followed with inclusion criteria set a priori. Two reviewers independently screened titles, abstracts and full-text articles. AACPDM quality ratings were completed for levels I-III studies. Of 259 titles, 29 articles met inclusion criteria, describing 28 primary research studies. One study, rated as strong level II evidence, supported positive impact of power mobility on overall development as well as independent mobility. Another study, rated as moderate level III evidence, supported positive impact on self-initiated movement. Remaining studies, rated evidence levels IV and V, provided support for a positive impact on a broad range of outcomes across International Classification of Functioning (ICF) components of body structure and function, activity and participation. Some studies suggest that environmental factors may be influential in successful power mobility use and skill development. The body of evidence supporting outcomes for children using power mobility is primarily descriptive rather than experimental in nature, suggesting research in this area is in its infancy. © The Author(s) 2014.

  10. A systematic review of the sleep, sleepiness, and performance implications of limited wake shift work schedules.

    Science.gov (United States)

    Short, Michelle A; Agostini, Alexandra; Lushington, Kurt; Dorrian, Jillian

    2015-09-01

    The aim of this review was to identify which limited wake shift work schedules (LWSW) best promote sleep, alertness, and performance. LWSW are fixed work/rest cycles where the time at work is ≤8 hours and there is >1 rest period per day, on average, for ≥2 consecutive days. These schedules are commonly used in safety-critical industries such as transport and maritime industries. Literature was sourced using PubMed, Embase, PsycInfo, Scopus, and Google Scholar databases. We identified 20 independent studies (plus a further 2 overlapping studies), including 5 laboratory and 17 field-based studies focused on maritime watch keepers, ship bridge officers, and long-haul train drivers. Outcome measurement varied, incorporating subjective and objective measures of sleep: sleep diaries (N=5), actigraphy (N=4), and polysomnography (N=3); sleepiness: Karolinska Sleepiness Scale (N=5), visual analog scale (VAS) alertness (N=2) and author-derived measures (N=2); and performance: Psychomotor Vigilance Test (PVT) (N=5), Reaction Time or Vigilance tasks (N=4), Vector and Letter Cancellation Test (N=1), and subjective performance (N=2). Of the three primary rosters examined (6 hours-on/6 hours-off, 8 hours-on/8 hours-off and 4 hours-on/8 hours-off), the 4 hours-on/8 hours-off roster was associated with better sleep and lower levels of sleepiness. Individuals working 4 hours-on/8 hours-off rosters averaged 1 hour more sleep per night than those working 6 hours-on/6 hours-off and 1.3 hours more sleep than those working 8 hours-on/8 hours-off. Sleep and sleepiness outcomes also favored (i) less time at work, (ii) more frequent rest breaks, (iii) shifts that start and end at the same clock time every 24 hours, and (iv) work shifts commencing in the daytime (as opposed to night). The findings for performance remain incomplete due to the small number of studies containing a performance measure and the heterogeneity of performance measures within those that did. The literature supports the utility of LWSW in
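The structural difference between these rosters follows from their cycle lengths: 4-on/8-off and 6-on/6-off cycles (12 h) divide the day evenly, so shifts recur at the same clock times, whereas an 8-on/8-off cycle (16 h) drifts around the clock. A small sketch, assuming for illustration that cycles start at midnight:

```python
def shift_start_times(on, off, days):
    """Clock hours (mod 24) at which work periods begin over `days` days,
    for a fixed on/off watchkeeping cycle starting at hour 0."""
    cycle = on + off
    starts, t = [], 0
    while t < days * 24:
        starts.append(t % 24)
        t += cycle
    return starts

print(shift_start_times(4, 8, 2))  # 12 h cycle: [0, 12, 0, 12] - repeats daily
print(shift_start_times(8, 8, 2))  # 16 h cycle: [0, 16, 8] - drifts by 8 h
```

The drifting start times of the 8-on/8-off roster are exactly the property that criterion (iii) above penalizes.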

  11. A new dimension of health care: systematic review of the uses, benefits, and limitations of social media for health communication.

    Science.gov (United States)

    Moorhead, S Anne; Hazlett, Diane E; Harrison, Laura; Carroll, Jennifer K; Irwin, Anthea; Hoving, Ciska

    2013-04-23

    There is currently a lack of information about the uses, benefits, and limitations of social media for health communication among the general public, patients, and health professionals from primary research. To review the current published literature to identify the uses, benefits, and limitations of social media for health communication among the general public, patients, and health professionals, and identify current gaps in the literature to provide recommendations for future health communication research. This paper is a review using a systematic approach. A systematic search of the literature was conducted using nine electronic databases and manual searches to locate peer-reviewed studies published between January 2002 and February 2012. The search identified 98 original research studies that included the uses, benefits, and/or limitations of social media for health communication among the general public, patients, and health professionals. The methodological quality of the studies assessed using the Downs and Black instrument was low, mainly because the vast majority of the studies included limited methodologies and were exploratory and descriptive in nature. Seven main uses of social media for health communication were identified, including increasing interactions with others and facilitating, sharing, and obtaining health messages. The six key overarching benefits were identified as (1) increased interactions with others, (2) more available, shared, and tailored information, (3) increased accessibility and widening access to health information, (4) peer/social/emotional support, (5) public health surveillance, and (6) potential to influence health policy. Twelve limitations were identified, primarily consisting of quality concerns and lack of reliability, confidentiality, and privacy. 
Social media brings a new dimension to health care as it offers a medium to be used by the public, patients, and health

  12. Systematic review of electronic surveillance of infectious diseases with emphasis on antimicrobial resistance surveillance in resource-limited settings.

    Science.gov (United States)

    Rattanaumpawan, Pinyo; Boonyasiri, Adhiratha; Vong, Sirenda; Thamlikitkul, Visanu

    2018-02-01

    Electronic surveillance of infectious diseases involves rapidly collecting, collating, and analyzing vast amounts of data from interrelated multiple databases. Although many developed countries have invested in electronic surveillance for infectious diseases, the system still presents a challenge for resource-limited health care settings. We conducted a systematic review by performing a comprehensive literature search on MEDLINE (January 2000-December 2015) to identify studies relevant to electronic surveillance of infectious diseases. Study characteristics and results were extracted and systematically reviewed by 3 infectious disease physicians. A total of 110 studies were included. Most surveillance systems were developed and implemented in high-income countries; less than one-quarter were conducted in low- or middle-income countries. Information technologies can be used to facilitate the process of obtaining laboratory, clinical, and pharmacologic data for the surveillance of infectious diseases, including antimicrobial resistance (AMR) infections. These novel systems require greater resources; however, we found that using electronic surveillance systems could result in shorter times to detect targeted infectious diseases and improvement of data collection. This study highlights a lack of resources in areas where an effective, rapid surveillance system is most needed. The availability of information technology for the electronic surveillance of infectious diseases, including AMR infections, will facilitate the prevention and containment of such emerging infectious diseases. Copyright © 2018 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  13. Systematic Expansion of Active Spaces beyond the CASSCF Limit: A GASSCF/SplitGAS Benchmark Study.

    Science.gov (United States)

    Vogiatzis, Konstantinos D; Li Manni, Giovanni; Stoneburner, Samuel J; Ma, Dongxia; Gagliardi, Laura

    2015-07-14

    The applicability and accuracy of the generalized active space self-consistent field (GASSCF) and SplitGAS methods are presented. The GASSCF method enables the exploration of larger active spaces than the conventional complete active space SCF (CASSCF) by fragmenting a large space into subspaces and by controlling the interspace excitations. In the SplitGAS method, the GAS configuration interaction (CI) expansion is further partitioned into two parts: a principal part, which includes the most important configuration state functions, and an extended part, containing less relevant but not negligible ones. An effective Hamiltonian is then generated, with the extended part acting as a perturbation to the principal space. Excitation energies of ozone, furan, pyrrole, nickel dioxide, and the copper tetrachloride dianion are reported. Various partitioning schemes of the GASSCF and SplitGAS CI expansions are considered and compared with complete active space second-order perturbation theory (CASPT2) and the multireference CI (MRCI) method, or with available experimental data. General guidelines for the optimal applicability of these methods are discussed together with their current limitations.
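The combinatorial motivation for GAS-type partitioning can be illustrated by counting determinants: restricting interspace excitations shrinks the CI expansion relative to a full CAS. The sketch below uses an invented CAS(8,8) example and the most restrictive partition (two 4-orbital subspaces with fixed electron counts and no interspace excitations); the numbers are illustrative, not from the paper.

```python
from math import comb

def cas_size(n_elec, n_orb):
    """Number of Slater determinants in CAS(n_elec, n_orb), assuming a
    singlet-like split into n_elec//2 alpha and n_elec//2 beta electrons."""
    na = nb = n_elec // 2
    return comb(n_orb, na) * comb(n_orb, nb)

# CAS(8,8): full CI within 8 orbitals.
full = cas_size(8, 8)

# GAS-like partition: two 4-orbital subspaces holding 4 electrons each,
# with no interspace excitations allowed (the most restrictive choice).
gas = cas_size(4, 4) * cas_size(4, 4)

print(full, gas)  # → 4900 1296
```

Allowing a limited number of interspace excitations interpolates between these two extremes, which is the flexibility GASSCF exploits.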

  14. Protein biomarkers on tissue as imaged via MALDI mass spectrometry: A systematic approach to study the limits of detection.

    Science.gov (United States)

    van de Ven, Stephanie M W Y; Bemis, Kyle D; Lau, Kenneth; Adusumilli, Ravali; Kota, Uma; Stolowitz, Mark; Vitek, Olga; Mallick, Parag; Gambhir, Sanjiv S

    2016-06-01

    MALDI mass spectrometry imaging (MSI) is emerging as a tool for protein and peptide imaging across tissue sections. Despite extensive study, there does not yet exist a baseline study evaluating the potential capabilities for this technique to detect diverse proteins in tissue sections. In this study, we developed a systematic approach for characterizing MALDI-MSI workflows in terms of limits of detection, coefficients of variation, spatial resolution, and the identification of endogenous tissue proteins. Our goal was to quantify these figures of merit for a number of different proteins and peptides, in order to gain more insight in the feasibility of protein biomarker discovery efforts using this technique. Control proteins and peptides were deposited in serial dilutions on thinly sectioned mouse xenograft tissue. Using our experimental setup, coefficients of variation and limits of detection were characterized for each control, providing a baseline for protein biomarkers and a new benchmarking strategy that can be used for comparing diverse MALDI-MSI workflows. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
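One common way to turn a serial-dilution experiment like this into a limit of detection is the calibration-curve estimate LOD ≈ 3.3·σ/S, where σ is the residual standard deviation and S the slope. The sketch below uses invented intensities and this generic ICH-style estimator, which is not necessarily the exact procedure of the study.

```python
import numpy as np

# Hypothetical serial-dilution calibration: amount deposited on tissue
# (fmol) vs mean MALDI-MSI signal intensity (arbitrary units).
amount = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
signal = np.array([0.1, 1.2, 2.1, 3.9, 8.2, 15.8])

slope, intercept = np.polyfit(amount, signal, 1)
residual_sd = np.std(signal - (slope * amount + intercept), ddof=2)

# Generic "3.3 sigma / slope" estimate of the limit of detection.
lod = 3.3 * residual_sd / slope
print(round(lod, 2), "fmol")
```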

  15. Effective convergence to complete orbital bases and to the atomic Hartree--Fock limit through systematic sequences of Gaussian primitives

    International Nuclear Information System (INIS)

    Schmidt, M.W.; Ruedenberg, K.

    1979-01-01

    Optimal starting points for expanding molecular orbitals in terms of atomic orbitals are the self-consistent-field orbitals of the free atoms, and accurate information about the latter is essential for the construction of effective AO bases for molecular calculations. For expansions of atomic SCF orbitals in terms of Gaussian primitives, which are of particular interest for applications in polyatomic quantum chemistry, previously available information has been limited in accuracy. In the present investigation a simple procedure is given for finding expansions of atomic self-consistent-field orbitals in terms of Gaussian primitives to arbitrarily high accuracy. The method furthermore opens an avenue for approaching complete basis sets through systematic sequences of atomic orbitals.
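The flavor of such expansions can be reproduced in a few lines: fit a 1s Slater-type function with a set of Gaussian primitives by linear least squares and watch the residual fall as primitives are added. The even-tempered exponents below are an illustrative choice, not the optimized sequences of the paper.

```python
import numpy as np

r = np.linspace(0.01, 8.0, 400)
slater = np.exp(-r)  # radial behavior of a 1s Slater-type orbital, zeta = 1

def fit_residual(n_prim):
    """Linear least-squares fit of exp(-r) by n_prim Gaussians exp(-a r^2)
    with even-tempered exponents a_k = 0.05 * 3**k (illustrative only)."""
    alphas = 0.05 * 3.0 ** np.arange(n_prim)
    basis = np.exp(-np.outer(r ** 2, alphas))     # shape (len(r), n_prim)
    coef, *_ = np.linalg.lstsq(basis, slater, rcond=None)
    return np.linalg.norm(basis @ coef - slater)

residuals = [fit_residual(n) for n in (2, 4, 6)]
print(residuals)  # the residual shrinks systematically as primitives are added
```

Because the exponent sets are nested, the residual can only decrease, mirroring the systematic-sequence idea in the abstract.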

  16. [To consider negative viral loads below the limit of quantification can lead to errors in the diagnosis and treatment of hepatitis C virus infection].

    Science.gov (United States)

    Acero Fernández, Doroteo; Ferri Iglesias, María José; López Nuñez, Carme; Louvrie Freire, René; Aldeguer Manté, Xavier

    2013-01-01

    For years many clinical laboratories have routinely classified undetectable and unquantifiable levels of hepatitis C virus RNA (HCV-RNA) determined by RT-PCR as below the limit of quantification (BLOQ). This practice might result in erroneous clinical decisions. To assess the frequency and clinical relevance of assuming that samples that are BLOQ are negative. We performed a retrospective analysis of RNA determinations performed between 2009 and 2011 (Cobas/Taqman, lower LOQ: 15 IU/mL). We distinguished between samples classified as «undetectable» and those classified as «<1.50E+01 IU/mL» (BLOQ). We analyzed 2,432 HCV-RNA measurements in 1,371 patients. RNA was BLOQ in 26 samples (1.07%) from 23 patients (1.68%). BLOQ results were highly prevalent among patients receiving Peg-Riba: 23 of 216 samples (10.6%) from 20 of 88 patients receiving treatment (22.7%). The clinical impact of BLOQ RNA samples was as follows: a) 2 patients initially considered to have negative results subsequently showed quantifiable RNA; b) 8 of 9 patients (88.9%) with BLOQ RNA at week 4 of treatment later showed sustained viral response; c) 3 patients with BLOQ RNA at weeks 12 and 48 of treatment relapsed; d) 4 patients with BLOQ RNA at week 24 and/or later had partial or breakthrough treatment responses; and e) in 5 patients the impact was null or could not be ascertained. This study suggests that BLOQ HCV-RNA indicates viremia and that equating a BLOQ result with a negative result can lead to treatment errors. BLOQ results are highly prevalent in on-treatment patients. The results of HCV-RNA quantification should be classified clearly, distinguishing between undetectable levels and levels that are BLOQ. Copyright © 2013 Elsevier España, S.L. and AEEH y AEG. All rights reserved.
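The reporting convention the authors argue for, distinguishing "undetectable" from "detected but below the limit of quantification", can be captured in a tiny classifier. This is a hypothetical sketch; apart from the stated 15 IU/mL LOQ, the names and report strings are illustrative.

```python
LOQ = 15.0  # lower limit of quantification of the assay, IU/mL

def classify_hcv_rna(detected, value_iu_ml=None):
    """Three-way report: 'undetectable', 'detected <LOQ' (BLOQ, i.e.
    viremia below 15 IU/mL, which must NOT be reported as negative),
    or a quantified value."""
    if not detected:
        return "undetectable"
    if value_iu_ml is None or value_iu_ml < LOQ:
        return "detected <LOQ"
    return f"{value_iu_ml:.0f} IU/mL"

print(classify_hcv_rna(False))        # undetectable
print(classify_hcv_rna(True, 8.0))    # BLOQ: viremia present, not negative
print(classify_hcv_rna(True, 5.3e5))  # quantified
```

Collapsing the middle category into "negative" is precisely the error the study documents.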

  17. Sleep deprivation in resident physicians, work hour limitations, and related outcomes: a systematic review of the literature.

    Science.gov (United States)

    Mansukhani, Meghna P; Kolla, Bhanu Prakash; Surani, Salim; Varon, Joseph; Ramar, Kannan

    2012-07-01

    Extended work hours, interrupted sleep, and shift work are integral parts of medical training among all specialties. The need for 24-hour patient care coverage and economic factors have resulted in prolonged work hours for resident physicians. This has traditionally been thought to enhance medical educational experience. These long and erratic work hours lead to acute and chronic sleep deprivation and poor sleep quality, resulting in numerous adverse consequences. Impairments may occur in several domains, including attention, cognition, motor skills, and mood. Resident performance, professionalism, safety, and well-being are affected by sleep deprivation, causing potentially adverse implications for patient care. Studies have shown adverse health consequences, motor vehicle accidents, increased alcohol and medication use, and serious medical errors to occur in association with both sleep deprivation and shift work. Resident work hour limitations have been mandated by the Accreditation Council for Graduate Medical Education in response to patient safety concerns. Studies evaluating the impact of these regulations on resident physicians have generated conflicting reports on patient outcomes, demonstrating only a modest increase in sleep duration for resident physicians, along with negative perceptions regarding their education. This literature review summarizes research on the effects of sleep deprivation and shift work, and examines current literature on the impact of recent work hour limitations on resident physicians and patient-related outcomes.

  18. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
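The comparison step described above can be sketched as minimum-Hamming-distance decoding over a toy code; the code words below are invented for illustration.

```python
def hamming(u, v):
    """Number of positions in which two equal-length (0,1)-vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def decode(received, code):
    """Minimum-distance decoding: compare the received word with every
    code word; the error vector is their componentwise (mod 2) difference."""
    best = min(code, key=lambda c: hamming(received, c))
    error = tuple(a ^ b for a, b in zip(received, best))
    return best, error

# Toy length-4 code.
code = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 1, 0, 0), (0, 0, 1, 1)]
word, err = decode((1, 0, 0, 0), code)
print(word, err)  # → (0, 0, 0, 0) (1, 0, 0, 0)
```

Note the tie: (1, 1, 0, 0) is also at distance 1, which is exactly the ambiguity among candidate error vectors that the abstract's decision problem addresses (`min` here simply returns the first minimizer).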

  19. Polysomnographic Aspects of Sleep Architecture on Self-limited Epilepsy with Centrotemporal Spikes: A Systematic Review and Meta-analysis

    Directory of Open Access Journals (Sweden)

    Camila dos Santos Halal

    Full Text Available Self-limited epilepsy with centrotemporal spikes is the most common paediatric epileptic syndrome, with growing evidence linking it to various degrees and presentations of neuropsychological dysfunction. The objective of this study is to evaluate the possible sleep macro- and microstructural alterations in children with this diagnosis. A systematic review of published manuscripts was carried out in the Medline, LILACS and Scielo databases, using the MeSH terms epilepsy, sleep and polysomnography. From 753 retrieved references, 5 were selected, and data on the macrostructure and, when available, the microstructure of sleep were extracted. Meta-analysis was performed with data from 4 studies using standardized mean difference. Findings were heterogeneous between studies: the most frequent macrostructural findings were a smaller proportion and greater latency of REM sleep in two studies, and in the meta-analysis a longer sleep latency was the most significant finding among epileptic patients. Only one study evaluated sleep microstructure, suggesting possible alterations in the cyclic alternating pattern in diagnosed children. Studies evaluating the macro- and microstructure of sleep in children with self-limited epilepsy with centrotemporal spikes are needed for a better understanding of the mechanisms of the neuropsychological disturbances that are frequently seen in children with this diagnosis.

  20. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, scaling quadratically with the error sources or, equivalently, with the rms β-beating. Random errors, however, do not have a systematic effect on the tune.
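The quadratic mechanism described here is easy to demonstrate numerically: an observable with zero-mean linear and positive quadratic dependence on random errors acquires a positive average equal to the quadratic coefficient times the summed error variance. The coefficients below are illustrative, not the optics response of any real lattice.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((100_000, 10))  # 10 zero-mean random error sources

a, b = 0.3, 0.05  # illustrative linear and quadratic response coefficients
beating = a * x.sum(axis=1) + b * (x ** 2).sum(axis=1)

# The linear term averages to zero; the quadratic term contributes
# b * n_sources * Var(x) = 0.05 * 10 * 1 = 0.5 on average.
print(beating.mean())
```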

  1. The impact of the self-interaction error on the density functional theory description of dissociating radical cations: ionic and covalent dissociation limits.

    Science.gov (United States)

    Gräfenstein, Jürgen; Kraka, Elfi; Cremer, Dieter

    2004-01-08

    Self-interaction corrected density functional theory was used to determine the self-interaction error for dissociating one-electron bonds. The self-interaction error of the unpaired electron mimics nondynamic correlation effects that have no physical basis, and these effects increase with increasing separation distance. At short distances the magnitude of the self-interaction error passes through a minimum and then increases again for decreasing R. The position of the minimum of the magnitude of the self-interaction error influences the equilibrium properties of the one-electron bond in the radical cations H2+ (1), B2H4+ (2), and C2H6+ (3), which differ significantly. These differences are explained by hyperconjugative interactions in 2 and 3 that are directly reflected by the self-interaction error and its orbital contributions. The density functional theory description of the dissociating radical cations suffers not only from the self-interaction error but also from the simplified description of interelectronic exchange. The calculated differences between ionic and covalent dissociation for 1, 2, and 3 provide an excellent criterion for determining the basic failures of density functional theory, self-interaction corrected density functional theory, and other methods. Pure electronic, orbital relaxation, and geometric relaxation contributions to the self-interaction error are discussed. The relevance of these effects for the description of transition states and charge transfer complexes is shown. Suggestions for the construction of new exchange-correlation functionals are given. In this connection, the disadvantages of recently suggested self-interaction error-free density functional theory methods are emphasized. (c) 2004 American Institute of Physics

  2. Drought Persistence Errors in Global Climate Models

    Science.gov (United States)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM simulations to observation-based data sets. To do so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates of drought persistence, where a dry status is defined as a negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at the global scale. Subsequently, we analyzed the degree to which (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at the monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At the annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
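The dry-to-dry transition probability used above as the persistence estimate can be sketched in a few lines. This is an illustrative implementation of the stated definition (a dry step is one with a negative precipitation anomaly); the anomaly values are synthetic, not from the study.

```python
# Sketch: drought persistence as the dry-to-dry transition probability
# P(dry at t+1 | dry at t), with "dry" = negative precipitation anomaly.

def dry_to_dry_probability(precip_anomalies):
    """Fraction of dry time steps that are followed by another dry step."""
    dry = [a < 0.0 for a in precip_anomalies]
    dry_pairs = sum(1 for a, b in zip(dry, dry[1:]) if a and b)
    dry_with_successor = sum(dry[:-1])  # dry steps that have a next step
    if dry_with_successor == 0:
        return float("nan")  # persistence undefined with no dry steps
    return dry_pairs / dry_with_successor

# Synthetic monthly anomalies (departures from climatology, mm/month):
anomalies = [-5.0, -2.0, 3.0, -1.0, -4.0, -0.5, 2.0, 1.0, -3.0, -2.5]
p = dry_to_dry_probability(anomalies)  # 4 dry->dry pairs out of 6 dry steps
```

The same estimator applied at annual scale uses far fewer time steps, which is exactly why the abstract finds the statistical estimation error dominating there.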

  3. Parental limited English proficiency and health outcomes for children with special health care needs: a systematic review.

    Science.gov (United States)

    Eneriz-Wiemer, Monica; Sanders, Lee M; Barr, Donald A; Mendoza, Fernando S

    2014-01-01

    One in 10 US adults of childbearing age has limited English proficiency (LEP). Parental LEP is associated with worse health outcomes among healthy children. The relationship of parental LEP to health outcomes for children with special health care needs (CSHCN) has not been systematically reviewed. To conduct a systematic review of peer-reviewed literature examining relationships between parental LEP and health outcomes for CSHCN. PubMed, Scopus, Cochrane Library, Social Science Abstracts, bibliographies of included studies. Key search term categories: language, child, special health care needs, and health outcomes. US studies published between 1964 and 2012 were included if: 1) subjects were CSHCN; 2) studies included some measure of parental LEP; 3) at least 1 outcome measure of child health status, access, utilization, costs, or quality; and 4) primary or secondary data analysis. Three trained reviewers independently screened studies and extracted data. Two separate reviewers appraised studies for methodological rigor and quality. From 2765 titles and abstracts, 31 studies met eligibility criteria. Five studies assessed child health status, 12 assessed access, 8 assessed utilization, 2 assessed costs, and 14 assessed quality. Nearly all (29 of 31) studies used only parent- or child-reported outcome measures, rather than objective measures. LEP parents were substantially more likely than English-proficient parents to report that their CSHCN were uninsured and had no usual source of care or medical home. LEP parents were also less likely to report family-centered care and satisfaction with care. Disparities persisted for children with LEP parents after adjustment for ethnicity and socioeconomic status. Parental LEP is independently associated with worse health care access and quality for CSHCN. Health care providers should recognize LEP as an independent risk factor for poor health outcomes among CSHCN. 
Emerging models of chronic disease care should integrate and

  4. Quality of life, psychological adjustment, and adaptive functioning of patients with intoxication-type inborn errors of metabolism - a systematic review.

    Science.gov (United States)

    Zeltner, Nina A; Huemer, Martina; Baumgartner, Matthias R; Landolt, Markus A

    2014-10-25

    In recent decades, considerable progress in diagnosis and treatment of patients with intoxication-type inborn errors of metabolism (IT-IEM) such as urea cycle disorders (UCD), organic acidurias (OA), maple syrup urine disease (MSUD), or tyrosinemia type 1 (TYR 1) has resulted in a growing group of long-term survivors. However, IT-IEM still require intense patient and caregiver effort in terms of strict dietetic and pharmacological treatment, and the threat of metabolic crises is always present. Furthermore, crises can affect the central nervous system (CNS), leading to cognitive, behavioural and psychiatric sequelae. Consequently, the well-being of the patients warrants consideration from both a medical and a psychosocial viewpoint by assessing health-related quality of life (HrQoL), psychological adjustment, and adaptive functioning. To date, an overview of findings on these topics for IT-IEM is lacking. We therefore aimed to systematically review the research on HrQoL, psychological adjustment, and adaptive functioning in patients with IT-IEM. Relevant databases were searched with predefined keywords. Study selection was conducted in two steps based on predefined criteria. Two independent reviewers completed the selection and data extraction. Eleven articles met the inclusion criteria. Studies were of varying methodological quality and used different assessment measures. Findings on HrQoL were inconsistent, with some showing lower and others showing higher or equal HrQoL for IT-IEM patients compared to norms. Findings on psychological adjustment and adaptive functioning were more consistent, showing mostly either no difference or worse adjustment of IT-IEM patients compared to norms. Single medical risk factors for HrQoL, psychological adjustment, or adaptive functioning have been addressed, while psychosocial risk factors have not been addressed. Data on HrQoL, psychological adjustment, and adaptive functioning for IT-IEM are sparse. 
Studies are inconsistent in

  5. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) over total amounts of special nuclear material, for example, uranium or 235 U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235 U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235 U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
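The first element of the methodology, variance approximation by Taylor series expansion, can be illustrated for the simplest case of a product measurement (uranium content = net weight x uranium concentration). The quantities and uncertainties below are hypothetical, not values from the NLO exercise.

```python
# Sketch: first-order Taylor-series variance propagation for U = W * c.
# Var(W*c) is approximated from the partial derivatives evaluated at the
# measured values: Var(U) ~ c^2 * Var(W) + W^2 * Var(c).
import math

def product_variance(w, var_w, c, var_c):
    """First-order Taylor approximation of Var(W*c) for independent errors."""
    return (c ** 2) * var_w + (w ** 2) * var_c

w, sigma_w = 1000.0, 2.0    # net weight (kg) and its standard deviation
c, sigma_c = 0.85, 0.004    # uranium mass fraction and its standard deviation

var_u = product_variance(w, sigma_w ** 2, c, sigma_c ** 2)
sigma_u = math.sqrt(var_u)
# A limit of error would typically be quoted as U +/- 2 * sigma_u.
```

Cumulating such variances over all uncorrelated primary error sources in a material balance area, as the abstract describes, amounts to summing the per-transaction variance contributions.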

  6. A New Dimension of Health Care: Systematic Review of the Uses, Benefits, and Limitations of Social Media for Health Communication

    Science.gov (United States)

    Hazlett, Diane E; Harrison, Laura; Carroll, Jennifer K; Irwin, Anthea; Hoving, Ciska

    2013-01-01

    Background There is currently a lack of information about the uses, benefits, and limitations of social media for health communication among the general public, patients, and health professionals from primary research. Objective To review the current published literature to identify the uses, benefits, and limitations of social media for health communication among the general public, patients, and health professionals, and identify current gaps in the literature to provide recommendations for future health communication research. Methods This paper is a review using a systematic approach. A systematic search of the literature was conducted using nine electronic databases and manual searches to locate peer-reviewed studies published between January 2002 and February 2012. Results The search identified 98 original research studies that included the uses, benefits, and/or limitations of social media for health communication among the general public, patients, and health professionals. The methodological quality of the studies, assessed using the Downs and Black instrument, was low, mainly because the vast majority of the studies in this review used limited methodologies and were mainly exploratory and descriptive in nature. Seven main uses of social media for health communication were identified, including focusing on increasing interactions with others, and facilitating, sharing, and obtaining health messages. The six key overarching benefits were identified as (1) increased interactions with others, (2) more available, shared, and tailored information, (3) increased accessibility and widening access to health information, (4) peer/social/emotional support, (5) public health surveillance, and (6) potential to influence health policy. Twelve limitations were identified, primarily consisting of quality concerns and lack of reliability, confidentiality, and privacy. Conclusions Social media brings a new dimension to health care as it offers a

  7. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  8. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    International Nuclear Information System (INIS)

    Wang, B; Pan, B; Tao, R; Lubineau, G

    2017-01-01

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and further introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach. (paper)
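The correction step described above, removing drift with a pre-established strain-time curve, can be sketched as follows. The linear drift model and the rescan values are illustrative assumptions; the paper's actual curve shape and magnitudes are not reproduced here.

```python
# Sketch: calibrating a self-heating strain-drift curve from rescans of a
# static sample, then subtracting it from later DVC strain measurements.
# A simple linear model strain = a + b*t is assumed for illustration.

def fit_linear_drift(times, strains):
    """Least-squares fit of strain = a + b*t to rescan (zero-strain) data."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(strains) / n
    b = sum((t - mt) * (s - ms) for t, s in zip(times, strains)) / \
        sum((t - mt) ** 2 for t in times)
    a = ms - b * mt
    return a, b

# Apparent strain (microstrain) from rescans of an undeformed sample:
t_rescan = [0.0, 10.0, 20.0, 30.0]          # minutes since tube warm-up
eps_rescan = [0.0, 140.0, 275.0, 410.0]     # drift due to tube self-heating

a, b = fit_linear_drift(t_rescan, eps_rescan)

def corrected_strain(measured, t):
    """Subtract the modelled self-heating drift from a DVC strain value."""
    return measured - (a + b * t)
```

A measurement of 420 microstrain taken 30 minutes in would thus be corrected down to roughly 9 microstrain of true deformation under this fitted model.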

  9. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    KAUST Repository

    Wang, B

    2017-02-15

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner and further introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach.

  10. Limits to modern contraceptive use among young women in developing countries: a systematic review of qualitative research

    Directory of Open Access Journals (Sweden)

    Wight Daniel

    2009-02-01

    Full Text Available Abstract Background Improving the reproductive health of young women in developing countries requires access to safe and effective methods of fertility control, but most rely on traditional rather than modern contraceptives such as condoms or oral/injectable hormonal methods. We conducted a systematic review of qualitative research to examine the limits to modern contraceptive use identified by young women in developing countries. Focusing on qualitative research allows the assessment of complex processes often missed in quantitative analyses. Methods Literature searches of 23 databases, including Medline, Embase and POPLINE®, were conducted. Literature from 1970–2006 concerning the 11–24 years age group was included. Studies were critically appraised and meta-ethnography was used to synthesise the data. Results Of the 12 studies which met the inclusion criteria, seven met the quality criteria and are included in the synthesis (six from sub-Saharan Africa, one from South-East Asia). Sample sizes ranged from 16 to 149 young women (age range 13–19 years). Four of the studies were urban based, one was rural, one semi-rural, and one mixed (predominantly rural). Use of hormonal methods was limited by lack of knowledge, obstacles to access and concern over side effects, especially fear of infertility. Although often more accessible, and sometimes more attractive than hormonal methods, condom use was limited by association with disease and promiscuity, together with greater male control. As a result young women often relied on traditional methods or abortion. Although the review was limited to five countries and conditions are not homogenous for all young women in all developing countries, the overarching themes were common across different settings and contexts, supporting the potential transferability of interventions to improve reproductive health. Conclusion Increasing modern contraceptive method use requires community-wide, multifaceted

  11. Current Guidelines Have Limited Applicability to Patients with Comorbid Conditions: A Systematic Analysis of Evidence-Based Guidelines

    Science.gov (United States)

    Lugtenberg, Marjolein; Burgers, Jako S.; Clancy, Carolyn; Westert, Gert P.; Schneider, Eric C.

    2011-01-01

    Background Guidelines traditionally focus on the diagnosis and treatment of single diseases. As almost half of the patients with a chronic disease have more than one disease, the applicability of guidelines may be limited. The aim of this study was to assess the extent that guidelines address comorbidity and to assess the supporting evidence of recommendations related to comorbidity. Methodology/Principal Findings We conducted a systematic analysis of evidence-based guidelines focusing on four highly prevalent chronic conditions with a high impact on quality of life: chronic obstructive pulmonary disease, depressive disorder, diabetes mellitus type 2, and osteoarthritis. Data were abstracted from each guideline on the extent that comorbidity was addressed (general comments, specific recommendations), the type of comorbidity discussed (concordant, discordant), and the supporting evidence of the comorbidity-related recommendations (level of evidence, translation of evidence). Of the 20 guidelines, 17 (85%) addressed the issue of comorbidity and 14 (70%) provided specific recommendations on comorbidity. In general, the guidelines included few recommendations on patients with comorbidity (mean 3 recommendations per guideline, range 0 to 26). Of the 59 comorbidity-related recommendations provided, 46 (78%) addressed concordant comorbidities, 8 (14%) discordant comorbidities, and for 5 (8%) the type of comorbidity was not specified. The strength of the supporting evidence was moderate for 25% (15/59) and low for 37% (22/59) of the recommendations. In addition, for 73% (43/59) of the recommendations the evidence was not adequately translated into the guidelines. Conclusions/Significance Our study showed that the applicability of current evidence-based guidelines to patients with comorbid conditions is limited. Most guidelines do not provide explicit guidance on treatment of patients with comorbidity, particularly for discordant combinations. Guidelines should be more

  12. Internal limiting membrane peeling and gas tamponade for myopic foveoschisis: a systematic review and meta-analysis.

    Science.gov (United States)

    Meng, Bo; Zhao, Lu; Yin, Yi; Li, Hongyang; Wang, Xiaolei; Yang, Xiufen; You, Ran; Wang, Jialin; Zhang, Youjing; Wang, Hui; Du, Ran; Wang, Ningli; Zhan, Siyan; Wang, Yanling

    2017-09-08

    Myopic foveoschisis (MF) is among the leading causes of visual loss in high myopia. However, it remains controversial whether internal limiting membrane (ILM) peeling or gas tamponade is a necessary treatment option for MF. PubMed, EMBASE, CBM, CNKI, WANFANG DATA and VIP databases were systematically reviewed. Outcome indicators were myopic foveoschisis resolution rate, visual acuity improvement and postoperative complications. Nine studies that included 239 eyes were selected. The proportion of resolution of foveoschisis was higher in the ILM peeling group than in the non-ILM peeling group (OR = 2.15, 95% CI: 1.06-4.35; P = 0.03). The proportion of postoperative complications was higher in the tamponade group than in the non-tamponade group (OR = 10.81, 95% CI: 1.26-93.02; P = 0.03). However, the proportion of visual acuity improvement (OR = 1.63, 95% CI: 0.56-4.80; P = 0.37) between the ILM peeling and non-ILM peeling groups and the proportion of resolution of foveoschisis (OR = 1.80, 95% CI: 0.76-4.28; P = 0.18) between the tamponade and non-tamponade groups were similar. Vitrectomy with internal limiting membrane peeling could contribute to better resolution of myopic foveoschisis than non-peeling; however, it does not significantly influence the proportion of visual acuity improvement or postoperative complications. Vitrectomy with gas tamponade is associated with more complications than non-tamponade and does not significantly influence the proportion of visual acuity improvement or resolution of myopic foveoschisis.
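The per-study odds ratios pooled above come from 2x2 tables of outcomes; a minimal sketch of the odds ratio with a Woolf (log-scale) 95% confidence interval is shown below. The counts are hypothetical, not taken from the included studies.

```python
# Sketch: odds ratio and Woolf 95% CI from a 2x2 table
# (a = events in group 1, b = non-events in group 1,
#  c = events in group 2, d = non-events in group 2).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR = (a*d)/(b*c) with a log-scale (Woolf) confidence interval."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: ILM peeling 40/50 resolved, no peeling 25/40 resolved.
or_, lo, hi = odds_ratio_ci(40, 10, 25, 15)
```

An interval whose lower bound exceeds 1 corresponds to a statistically significant advantage, as with the resolution-rate result (OR = 2.15, 95% CI: 1.06-4.35) reported above.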

  13. Internal limiting membrane peeling or not: a systematic review and meta-analysis of idiopathic macular pucker surgery.

    Science.gov (United States)

    Fang, Xiao-Ling; Tong, Yao; Zhou, Ya-Li; Zhao, Pei-Quan; Wang, Zhao-Yang

    2017-11-01

    To determine whether internal limiting membrane (ILM) peeling improves anatomical and functional outcomes in idiopathic macular pucker (IMP)/epiretinal membrane (ERM) surgery in this systematic review and meta-analysis. We searched the PubMed, Medline, Web of Science, Cochrane, Ovid MEDLINE, ClinicalTrials.gov and CNKI databases for studies published before 15 September 2016. The eligibility criteria included studies comparing ILM peeling versus no-peeling for IMP surgery. Thirteen articles (10 retrospective cohort studies, 1 prospective cohort study and 2 randomised controlled trials (RCTs)) were included in the review. Primary outcomes: no differences were observed in the best-corrected visual acuity (BCVA) or central macular thickness (CMT) at 12 months; however, lower ERM recurrence (OR, 0.13; 95% CI 0.04 to 0.41; p=0.0004) and reoperation rates (OR, 0.10; 95% CI 0.02 to 0.49; p=0.004) that favoured ILM peeling were observed at the final follow-up. Secondary outcomes: no difference was observed in BCVA at 3 or 6 months or the final follow-up, or in CMT at 3 or 6 months or the final follow-up. Significantly increased CMT, which favoured ILM peeling, was observed at the final follow-up (p=0.002) in the RCTs. ILM peeling yielded greater anatomical success, but no improvement in functional outcomes, as the treatment of choice for patients undergoing IMP surgery. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  14. EFFECT OF INTERNAL LIMITING MEMBRANE PEELING DURING VITRECTOMY FOR DIABETIC MACULAR EDEMA: Systematic Review and Meta-analysis.

    Science.gov (United States)

    Nakajima, Takuya; Roggia, Murilo F; Noda, Yasuo; Ueta, Takashi

    2015-09-01

    To evaluate the effect of internal limiting membrane (ILM) peeling during vitrectomy for diabetic macular edema. MEDLINE, EMBASE, and CENTRAL were systematically reviewed. Eligible studies included randomized or nonrandomized studies that compared surgical outcomes of vitrectomy with or without ILM peeling for diabetic macular edema. The primary and secondary outcome measures were postoperative best-corrected visual acuity and central macular thickness. Meta-analysis of mean differences between vitrectomy with and without ILM peeling was performed using the inverse-variance method under a random-effects model. Five studies (7 articles) with 741 patients were eligible for analysis. Superiority (95% confidence interval) in postoperative best-corrected visual acuity in the ILM peeling group compared with the nonpeeling group was 0.04 (-0.05 to 0.13) logMAR (equivalent to 2.0 ETDRS letters, P = 0.37), and superiority in best-corrected visual acuity change in the ILM peeling group was 0.04 (-0.02 to 0.09) logMAR (equivalent to 2.0 ETDRS letters, P = 0.16). There was no significant difference in postoperative central macular thickness or central macular thickness reduction between the two groups. The visual acuity outcomes using pars plana vitrectomy with ILM peeling versus no ILM peeling were not significantly different. A larger randomized prospective study would be necessary to adequately address the effectiveness of ILM peeling on visual acuity outcomes.
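The pooling method named above, the inverse-variance method, can be sketched in its fixed-effect form (the random-effects variant used in the paper additionally adds a between-study variance tau^2 to each study's variance before weighting). The study effects and variances below are hypothetical logMAR mean differences, not the paper's data.

```python
# Sketch: inverse-variance pooling of per-study mean differences.
# Each study is weighted by the reciprocal of its effect-size variance.
import math

def inverse_variance_pool(effects, variances, z=1.96):
    """Pooled effect and 95% CI under a fixed-effect inverse-variance model."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - z * se, pooled + z * se

md = [0.05, 0.02, 0.08]        # hypothetical per-study BCVA differences (logMAR)
var = [0.001, 0.002, 0.004]    # corresponding variances

pooled, lo, hi = inverse_variance_pool(md, var)
```

A pooled interval that straddles zero, as in the 0.04 (-0.05 to 0.13) logMAR result above, indicates no significant difference between the arms.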

  15. IMPACT OF INTERNAL LIMITING MEMBRANE PEELING ON MACULAR HOLE REOPENING: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Rahimy, Ehsan; McCannel, Colin A

    2016-04-01

    To assess the literature regarding macular hole reopening rates stratified by whether the internal limiting membrane (ILM) was peeled during vitrectomy surgery. Systematic review and meta-analysis of studies reporting on macular hole reopenings among previously surgically closed idiopathic macular holes. A comprehensive literature search using the National Library of Medicine PubMed interface was used to identify potentially eligible publications in English. The minimum mean follow-up period for reports to be included in this study was 12 months. Analysis was divided into eyes that underwent vitrectomy with and without ILM peeling. The primary outcome parameter was the proportion of macular hole reopenings among previously closed holes between the two groups. Secondary outcome parameters included duration from initial surgery to hole reopening and preoperative and postoperative best-corrected visual acuities in the non-ILM peeling and ILM peeling groups. A total of 50 publications reporting on 5,480 eyes met inclusion criteria and were assessed in this meta-analysis. The reopening rate without ILM peeling was 7.12% (125 of 1,756 eyes), compared with 1.18% (44 of 3,724 eyes) with ILM peeling (odds ratio: 0.16; 95% confidence interval: 0.11-0.22; Fisher's exact test). ILM peeling during macular hole surgery reduces the likelihood of macular hole reopening.
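The reported odds ratio can be reproduced directly from the counts given in the abstract (44 reopenings among 3,724 eyes with ILM peeling versus 125 among 1,756 eyes without):

```python
# Sketch: recomputing the reported odds ratio from the abstract's counts.
peel_reopen, peel_total = 44, 3724
nopeel_reopen, nopeel_total = 125, 1756

# OR = (peeling odds of reopening) / (no-peeling odds of reopening)
or_ = (peel_reopen * (nopeel_total - nopeel_reopen)) / \
      ((peel_total - peel_reopen) * nopeel_reopen)
# or_ is about 0.16, matching the reported odds ratio.
```

An odds ratio well below 1 here quantifies the protective effect of ILM peeling against reopening that the conclusion states.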

  16. Development of an iterative reconstruction method to overcome 2D detector low resolution limitations in MLC leaf position error detection for 3D dose verification in IMRT

    NARCIS (Netherlands)

    Visser, Ruurd; Godart, J.; Wauben, D.J.L.; Langendijk, J.; van 't Veld, A.A.; Korevaar, E.W.

    2016-01-01

    The objective of this study was to introduce a new iterative method to reconstruct multi leaf collimator (MLC) positions based on low resolution ionization detector array measurements and to evaluate its error detection performance. The iterative reconstruction method consists of a fluence model, a

  17. Downlink Error Rates of Half-duplex Users in Full-duplex Networks over a Laplacian Inter-User Interference Limited and EGK fading

    KAUST Repository

    Soury, Hamza; Elsawy, Hesham; Alouini, Mohamed-Slim

    2017-01-01

    This paper develops a mathematical framework to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). The developed model is used to motivate long term pairing for users that have

  18. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well known and widespread Latin proverb which states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper is analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  19. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors that affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper.

  20. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  1. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and then simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. Therefore, the very first question is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying the time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, the spatiotemporally correlated unmodeled errors are commonly existent in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
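The time-domain Allan variance used above as a cross-check can be sketched for a residual series: pure white noise decays with averaging time, while correlated (unmodeled) signal flattens or raises the curve at long averaging times. This overlapping-estimator sketch and its inputs are illustrative, not the paper's implementation.

```python
# Sketch: overlapping Allan variance of a residual time series at
# averaging factor m (m samples per cluster). Requires 2*m <= len(x).

def allan_variance(x, m):
    """Overlapping Allan variance: mean squared difference of successive
    m-sample cluster averages, divided by 2."""
    n = len(x)
    assert 2 * m <= n, "series too short for this averaging factor"
    # Cluster averages over windows of length m, stepping one sample:
    means = [sum(x[i:i + m]) / m for i in range(n - m + 1)]
    diffs = [means[i + m] - means[i] for i in range(len(means) - m)]
    return sum(d * d for d in diffs) / (2.0 * len(diffs))
```

Evaluating this at several m and inspecting the slope of the resulting curve is what distinguishes the white-noise component from the stationary and nonstationary unmodeled-error components identified above.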

  2. What value, detection limits

    International Nuclear Information System (INIS)

    Currie, L.A.

    1986-01-01

    Specific approaches and applications of LLD's to nuclear and ''nuclear-related'' measurements are presented in connection with work undertaken for the U.S. Nuclear Regulatory Commission and the International Atomic Energy Agency. In this work, special attention was given to assumptions and potential error sources, as well as to different types of analysis. For the former, the authors considered random and systematic error associated with the blank and the calibration and sample preparation processes, as well as issues relating to the nature of the random error distributions. Analysis types considered included continuous monitoring, ''simple counting'' involving scalar quantities, and spectrum fitting involving data vectors. The investigation of data matrices and multivariate analysis is also described. The most important conclusions derived from this study are: that there is a significant lack of communication and compatibility resulting from diverse terminology and conceptual bases - including no-basis ''ad hoc'' definitions; that the distinction between detection decisions and detection limits is frequently lost sight of; and that quite erroneous LOD estimates follow from inadequate consideration of the actual variability of the blank, and systematic error associated with the blank, the calibration-recovery factor, matrix effects, and ''black box'' data reduction models

  3. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  4. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    NARCIS (Netherlands)

    Luijten, M.; Machielsen, M.W.J.; Veltman, D.J.; Hester, R.; de Haan, L.; Franken, I.H.A.

    2014-01-01

    Background: Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The

  5. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    NARCIS (Netherlands)

    Luijten, Maartje; Machielsen, Marise W. J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H. A.

    2014-01-01

    Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined

  6. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  7. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  8. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  9. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  10. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  11. The potentiometric and laser RAMAN study of the hydrolysis of uranyl chloride under physiological conditions and the effect of systematic and random errors on the hydrolysis constants

    International Nuclear Information System (INIS)

    Deschenes, L.L.; Kramer, G.H.; Monserrat, K.J.; Robinson, P.A.

    1986-12-01

    The hydrolysis of uranyl ions in 0.15 mol/L (Na)Cl solution at 37 °C has been studied by potentiometric titration. The results were consistent with the formation of (UO₂)₂(OH)₂, (UO₂)₃(OH)₄, (UO₂)₃(OH)₅ and (UO₂)₄(OH)₇. The stability constants, which were evaluated using a version of MINIQUAD, were found to be: log β₂₂ = -5.693 ± 0.007, log β₃₄ = -11.499 ± 0.024, log β₃₅ = -16.001 ± 0.050, log β₄₇ = -21.027 ± 0.051. Laser Raman spectroscopy has been used to identify the products, including the (UO₂)₄(OH)₇ species. The difficulties in identifying the chemical species in solution, and the effect of small errors on this selection, have also been investigated by computer simulation. The results clearly indicate that small errors can lead to the selection of species that may not exist

  12. Limited evidence for intranasal fentanyl in the emergency department and the prehospital setting--a systematic review

    DEFF Research Database (Denmark)

    Hansen, Morten Sejer; Dahl, Jørgen Berg

    2013-01-01

    The intranasal (IN) mode of application may be a valuable asset in non-invasive pain management. Fentanyl demonstrates pharmacokinetic and pharmacodynamic properties that are desirable in the management of acute pain, and IN fentanyl may be of value in the prehospital setting. The aim of this systematic review was to evaluate the current evidence for the use of IN fentanyl in the emergency department (ED) and prehospital setting.

  13. Protocol for the systematic review of the prevention, treatment and public health management of impetigo, scabies and fungal skin infections in resource-limited settings.

    Science.gov (United States)

    May, Philippa; Bowen, Asha; Tong, Steven; Steer, Andrew; Prince, Sam; Andrews, Ross; Currie, Bart; Carapetis, Jonathan

    2016-09-23

    Impetigo, scabies, and fungal skin infections disproportionately affect populations in resource-limited settings. Evidence for standard treatment of skin infections predominantly stems from hospital-based studies in high-income countries. The evidence for treatment in resource-limited settings is less clear, as studies in these populations may lack randomisation and control groups for cultural, ethical or economic reasons. Likewise, a synthesis of the evidence for public health control within endemic populations is also lacking. We propose a systematic review of the evidence for the prevention, treatment and public health management of skin infections in resource-limited settings, to inform the development of guidelines for the standardised and streamlined clinical and public health management of skin infections in endemic populations. The protocol has been designed in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols statement. All trial designs and analytical observational study designs will be eligible for inclusion. A systematic search of the peer-reviewed literature will include PubMed, Excerpta Medica and Global Health. Grey literature databases will also be systematically searched, and clinical trials registries scanned for future relevant studies. The primary outcome of interest will be the clinical cure or decrease in prevalence of impetigo, scabies, crusted scabies, tinea capitis, tinea corporis or tinea unguium. Two independent reviewers will perform eligibility assessment and data extraction using standardised electronic forms. Risk of bias assessment will be undertaken by two independent reviewers according to the Cochrane Risk of Bias tool. Data will be tabulated and narratively synthesised. We expect there will be insufficient data to conduct meta-analysis. The final body of evidence will be reported against the Grades of Recommendation, Assessment, Development and Evaluation grading system. The evidence

  14. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.

  15. Boost first, eliminate systematic error, and individualize CTV to PTV margin when treating lymph nodes in high-risk prostate cancer

    International Nuclear Information System (INIS)

    Rossi, Peter J.; Schreibmann, Eduard; Jani, Ashesh B.; Master, Viraj A.; Johnstone, Peter A.S.

    2009-01-01

    Purpose: The purpose of this report is to evaluate the movement of the planning target volume (PTV) in relation to the pelvic lymph nodes (PLNs) during treatment of high-risk prostate cancer. Patients and methods: We reviewed the daily treatment course of ten consecutively treated patients with high-risk prostate cancer. PLNs were included in the initial PTV for each patient. Daily on-board imaging of gold fiducial markers implanted in the prostate was used; daily couch shifts were made as needed and recorded. We analyzed how the daily couch shifts impacted the dose delivered to the PLNs. Results: A PLN clinical target volume was identified in each man using CT-based treatment planning. At treatment planning, the median minimum planned dose to the PLNs was 95%, the maximum 101%, and the mean 97%. Daily couch shifting to the prostate markers degraded the dose slightly; the median minimum delivered dose to the PLNs was 92%, the maximum 101%, and the mean 96%. We found two cases where daily systematic shifts resulted in underdosing of the PLNs by 9% and 29%, respectively. In the other cases, daily shifts were random and led to a mean 2.2% degradation of planned to delivered PLN dose. Conclusions: We demonstrated degradation of the delivered dose to the PLN PTV, which may occur if daily alignment only to the prostate is considered. To improve PLN PTV coverage, it may be preferable to deliver the prostate boost treatment first and adapt the PTV of the pelvic/nodal treatment to the uncertainties documented during the boost treatment

  16. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types and common causes of medication errors, and the monitoring, consequences, prevention and management of medication errors, supported by clear tables that make the material easy to understand.

  17. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  18. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors

    NARCIS (Netherlands)

    Stroobants, A. K.; Goldschmidt, H. M. J.; Plebani, M.

    2003-01-01

    Background: Random, systematic and sporadic errors, which unfortunately are not uncommon in laboratory medicine, can have a considerable impact on the well being of patients. Although somewhat difficult to attain, our main goal should be to prevent all possible errors. A good insight on error-prone

  19. The impact of initiatives to limit the advertising of food and beverage products to children: a systematic review.

    Science.gov (United States)

    Galbraith-Emami, S; Lobstein, T

    2013-12-01

    In response to increasing evidence that advertising of foods and beverages affects children's food choices and food intake, several national governments and many of the world's larger food and beverage manufacturers have acted to restrict the marketing of their products to children or to advertise only 'better for you' products or 'healthier dietary choices' to children. Independent assessment of the impact of these pledges has been difficult due to the different criteria being used in regulatory and self-regulatory regimes. In this paper, we undertook a systematic review to examine the data available on levels of exposure of children to the advertising of less healthy foods since the introduction of the statutory and voluntary codes. The results indicate a sharp division in the evidence, with scientific, peer-reviewed papers showing that high levels of such advertising of less healthy foods continue to be found in several different countries worldwide. In contrast, the evidence provided in industry-sponsored reports indicates a remarkably high adherence to voluntary codes. We conclude that adherence to voluntary codes may not sufficiently reduce the advertising of foods which undermine healthy diets, or reduce children's exposure to this advertising. © 2013 The Authors. obesity reviews © 2013 International Association for the Study of Obesity.

  20. Orbit error characteristic and distribution of TLE using CHAMP orbit data

    Science.gov (United States)

    Xu, Xiao-li; Xiong, Yong-qing

    2018-02-01

    Space object orbital covariance data is required for collision risk assessments, but publicly accessible two line element (TLE) data does not provide orbital error information. This paper compared historical TLE data and GPS precision ephemerides of CHAMP to assess TLE orbit accuracy from 2002 to 2008, inclusive. TLE error spatial variations with longitude and latitude were calculated to analyze error characteristics and distribution. The results indicate that TLE orbit data are systematically biased owing to the limitations of the SGP4 model. The biases can reach the level of kilometers, and their sign and magnitude correlate significantly with longitude.
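Separating the systematic bias from the random scatter of TLE orbit errors amounts to simple statistics on the position differences; the sketch below is illustrative only (the sample size and error magnitudes are invented, not the CHAMP values):

```python
import numpy as np

def tle_error_stats(tle_pos, ref_pos):
    """Given TLE-propagated and reference (e.g., GPS precise ephemeris)
    positions in the same frame (km), return the per-axis systematic bias
    (mean difference) and random scatter (sample std) of the TLE error."""
    d = np.asarray(tle_pos, dtype=float) - np.asarray(ref_pos, dtype=float)
    return d.mean(axis=0), d.std(axis=0, ddof=1)

# toy data: a 1.5 km bias on the first axis plus 0.3 km random noise
rng = np.random.default_rng(1)
ref = rng.normal(scale=7000.0, size=(100, 3))          # reference positions, km
tle = ref + np.array([1.5, 0.0, 0.0]) + rng.normal(scale=0.3, size=(100, 3))
bias, scatter = tle_error_stats(tle, ref)
```

With real data one would additionally bin these differences by longitude and latitude, as the paper does, to map the spatial distribution of the bias.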

  1. Error management process for power stations

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Takeda, Daisuke; Fujimoto, Junzo; Nagasaka, Akihiko

    2016-01-01

    The purpose of this study is to establish an 'error management process for power stations' to systematize activities for human error prevention and to foster continuous improvement of these activities. The following are proposed by deriving concepts concerning the error management process from existing knowledge and realizing them through application and evaluation of their effectiveness at a power station: an overall picture of the error management process that facilitates the four functions requisite for managing human error prevention effectively (1. systematizing human error prevention tools, 2. identifying problems based on incident reports and taking corrective actions, 3. identifying good practices and potential problems for taking proactive measures, 4. prioritizing human error prevention tools based on identified problems); detailed steps for each activity (i.e., developing an annual plan for human error prevention, reporting and analyzing incidents and near misses) based on a model of human error causation; procedures and example items for identifying gaps between current and desired levels of execution and outputs of each activity; and stages for introducing and establishing the proposed error management process at a power station. By giving shape to the above proposals at a power station, systematization and continuous improvement of activities for human error prevention in line with the actual situation of the power station can be expected. (author)

  2. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hv) = -ln[T(hv)]/(pL), where T(hv) is the transmission for photon energy hv, p is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(pL))Δ(pL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(pL)/(pL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln(T))(ΔB/B + ΔB0/B0) + Δ(pL)/(pL). Transmission is measured in the range of 0.2
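The fractional-error budget above chains two propagation steps and is easy to evaluate numerically. A minimal sketch, with hypothetical example numbers rather than the paper's measured uncertainties:

```python
import math

def opacity_fractional_error(T, dB_over_B, dB0_over_B0, dpL_over_pL):
    """Fractional opacity error dk/k for k = -ln(T)/(pL):
    dT/T = dB/B + dB0/B0, and dk/k = (dT/T)/|ln T| + d(pL)/(pL)."""
    dT_over_T = dB_over_B + dB0_over_B0
    return dT_over_T / abs(math.log(T)) + dpL_over_pL

# e.g. T = 0.3 with a 2% error on each backlighter signal and 3% on pL
err = opacity_fractional_error(T=0.3, dB_over_B=0.02,
                               dB0_over_B0=0.02, dpL_over_pL=0.03)
# err ≈ 0.063, i.e. about a 6.3% opacity error
```

Note the 1/|ln T| amplification: as T approaches 1 (thin samples), the same signal errors produce a much larger fractional opacity error, which is why transmission is kept well below unity.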

  3. Implication of spot position error on plan quality and patient safety in pencil-beam-scanning proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G. [Division of Medical Physics, Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2014-08-15

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2×2 to 15×15 cm². Two nominal spot sizes, 4.0 and 6.6 mm (1 σ in water at isocenter), were used for treatment planning. The SS ranged from 0.5 σ to 1.5 σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and of systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the percent dose error at the maximum dose difference, PDE(ΔDmax), were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE(ΔDmax) is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, PDE(ΔDmax) is not noticeably affected by a change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1 σ resulted in a 2% maximum dose error; a SS larger than 1.25 σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed for multiple spot errors (both systematic and random). Systematic PE can lead to noticeable hot
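The hot/cold-spot mechanism and its dependence on spot spacing can be illustrated with a one-dimensional toy model of equally weighted Gaussian spots. This is a sketch under invented geometry, not the commercial planning system or the facility's beam model:

```python
import numpy as np

def max_percent_dose_error(sigma=4.0, spacing=4.0, shift=1.2, n_spots=21):
    """1-D toy model: equal-weight Gaussian spots of width sigma (mm) on a
    grid with the given spacing (mm); shift the central spot by `shift` mm
    and report the maximum dose error as a percent of the nominal maximum,
    mirroring the hot/cold-spot pair created per spot shift."""
    x = np.linspace(-60.0, 60.0, 2401)
    centers = (np.arange(n_spots) - n_spots // 2) * spacing

    def dose(cs):
        return sum(np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in cs)

    nominal = dose(centers)
    perturbed = dose(np.where(np.arange(n_spots) == n_spots // 2,
                              centers + shift, centers))
    return 100.0 * np.max(np.abs(perturbed - nominal)) / nominal.max()

err_tight = max_percent_dose_error(spacing=4.0)  # SS = 1.0 sigma
err_loose = max_percent_dose_error(spacing=6.0)  # SS = 1.5 sigma
```

Consistent with the trend reported above, widening the spacing from 1 σ to 1.5 σ increases the maximum percent dose error for the same 1.2 mm shift, because a sparser spot grid provides less averaging over neighbouring spots.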

  4. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    Science.gov (United States)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    The limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It is theoretically defined by combining the two-component variance regression and LOQ schemas already present in the literature, and it is applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, especially when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order ones and therefore cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on at least one significant digit in the measurement; the resulting LOQ values were very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as it is more easily computable.
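A minimal sketch of a Currie-style LOQ under a two-component (constant plus proportional) variance model; the function and example numbers are hypothetical, not the paper's zinc calibration:

```python
import math

def loq_two_component(s0, s1, k=10.0):
    """Currie-style LOQ under the two-component variance model
    s(c)^2 = s0^2 + (s1*c)^2 (constant + proportional error):
    the smallest concentration c whose relative standard deviation
    s(c)/c equals 1/k (default 10%). Requires s1 < 1/k, otherwise
    the RSD never falls to 1/k at any concentration."""
    if s1 >= 1.0 / k:
        raise ValueError("proportional error too large: RSD floor exceeds 1/k")
    return s0 / math.sqrt(1.0 / k ** 2 - s1 ** 2)

# constant-error-only case reduces to the classical LOQ = 10 * s0
loq_const = loq_two_component(s0=0.5, s1=0.0)   # -> 5.0
loq_both = loq_two_component(s0=0.5, s1=0.05)   # larger than 5.0
```

The proportional term matters exactly in the situation the abstract highlights: when non-instrumental (e.g., contamination-driven) error does not vanish at low concentration, the naive constant-variance LOQ is optimistic.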

  5. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure of finding the correction factors with COMFORT-PLUS has been used to find the beam focus errors for the two damping rings at the SLAC Linear Collider. The program is intended as an off-line tool to analyze actual measured data for any SLC system. One limitation of the procedure is that it depends on the magnitude of the machine errors. Another is that the program is not fully automated, since the user must decide a priori where to look for errors

  6. Methodological limitations of psychosocial interventions in patients with an implantable cardioverter-defibrillator (ICD): A systematic review

    Directory of Open Access Journals (Sweden)

    Ockene Ira S

    2009-12-01

    Background: Despite the potentially life-saving benefits of the implantable cardioverter-defibrillator (ICD), a significant group of patients experiences emotional distress after ICD implantation. Different psychosocial interventions have been employed to improve this condition, but previous reviews have suggested that methodological issues may limit the validity of such interventions. Aim: To review the methodology of previously published studies of psychosocial interventions in ICD patients, according to CONSORT statement guidelines for non-pharmacological interventions, and provide recommendations for future research. Methods: We electronically searched the PubMed, PsycInfo and Cochrane databases. To be included, studies needed to be published in a peer-reviewed journal between 1980 and 2008, to involve a human population aged 18+ years and to have an experimental design. Results: Twelve studies met the eligibility criteria. Samples were generally small. Interventions were very heterogeneous; most studies used cognitive behavioural therapy (CBT) and exercise programs either as unique interventions or as part of a multi-component program. Overall, studies showed a favourable effect on anxiety (6/9) and depression (4/8). CBT appeared to be the most effective intervention. There was no effect on the number of shocks and arrhythmic events, probably because studies were not powered to detect such an effect. Physical functioning improved in the three studies evaluating this outcome. Lack of information about the indication for ICD implantation (primary vs. secondary prevention), limited or no information regarding use of anti-arrhythmic (9/12) and psychotropic (10/12) treatment, lack of assessment of providers' treatment fidelity (12/12) and of patients' adherence to the intervention (11/12) were the most common methodological limitations.
Conclusions: Overall, this review supports preliminary evidence of a positive effect of psychosocial interventions

  7. Selection criteria limit generalizability of smoking pharmacotherapy studies differentially across clinical trials and laboratory studies: A systematic review on varenicline.

    Science.gov (United States)

    Motschman, Courtney A; Gass, Julie C; Wray, Jennifer M; Germeroth, Lisa J; Schlienz, Nicolas J; Munoz, Diana A; Moore, Faith E; Rhodes, Jessica D; Hawk, Larry W; Tiffany, Stephen T

    2016-12-01

    The selection criteria used in clinical trials for smoking cessation and in laboratory studies that seek to understand mechanisms responsible for treatment outcomes may limit their generalizability to one another and to the general population. We reviewed studies on varenicline versus placebo and compared eligibility criteria and participant characteristics of clinical trials (N=23) and laboratory studies (N=22) across study type and to nationally representative survey data on adult, daily USA smokers (2014 National Health Interview Survey; 2014 National Survey on Drug Use and Health). Relative to laboratory studies, clinical trials more commonly reported excluding smokers who were unmotivated to quit and for specific medical conditions (e.g., cardiovascular disease, COPD), although both study types frequently reported excluding for general medical or psychiatric reasons. Laboratory versus clinical samples smoked less, had lower nicotine dependence, were younger, and more homogeneous with respect to smoking level and nicotine dependence. Application of common eligibility criteria to national survey data resulted in considerable elimination of the daily-smoking population for both clinical trials (≥47%) and laboratory studies (≥39%). Relative to the target population, studies in this review recruited participants who smoked considerably more and had a later smoking onset age, and were under-representative of Caucasians. Results suggest that selection criteria of varenicline studies limit generalizability in meaningful ways, and differences in criteria across study type may undermine efforts at translational research. Recommendations for improvements in participant selection and reporting standards are discussed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Healthcare users' experiences of communicating with healthcare professionals about children who have life-limiting conditions: a qualitative systematic review protocol.

    Science.gov (United States)

    Ekberg, Stuart; Bradford, Natalie; Herbert, Anthony; Danby, Susan; Yates, Patsy

    2015-11-01

    -quality communication with and about children who have life-limiting conditions, this does not mean that these stakeholders necessarily share the same perspective of what constitutes high-quality communication and the best way of accomplishing this. Focusing on healthcare users' experiences of communication with healthcare professionals about children who have life-limiting conditions, the present review will explore the subjective impact of professionals' communication on the people for whom they provide care.It may be necessary to consider a range of contextual factors to understand healthcare users' experiences of communicating with healthcare professionals about children who have life-limiting conditions. For instance, age, developmental stage, cognitive capacity, emotional and social strengths, and family dynamics can influence a child's level of involvement in discussions about their condition and care. Although there are factors that appear more consistent across the range of pediatric palliative care users, such as parents' preferences for being treated by healthcare professionals as partners in making decisions about the care of their child, there is not always such consistency. Nor is it clear whether such findings can be generalized across different cultural contexts. In appraising existing research, this systematic review will therefore consider the relationship between the context of individual studies and their reported findings.The primary aim of this review is to identify, appraise and synthesize existing qualitative evidence of healthcare users' experiences of communicating with healthcare professionals about children who have life-limiting conditions. The review will consider relevant details of these findings, particularly whether factors like age are relevant for understanding particular experiences of communication. 
An outcome of this review will be the identification of best available qualitative evidence that can be used to inform professional practice, as well

  9. Identification and Assessment of Human Errors in Postgraduate Endodontic Students of Kerman University of Medical Sciences by Using the SHERPA Method

    Directory of Open Access Journals (Sweden)

    Saman Dastaran

    2016-03-01

    Full Text Available Introduction: Human errors are the cause of many accidents, both industrial and medical, so finding an approach to identifying and reducing them is very important. Since no study had been done on human errors in the dental field, this study aimed to identify and assess human errors among postgraduate endodontic students of Kerman University of Medical Sciences using the SHERPA method. Methods: This cross-sectional study was performed during 2014. Data were collected through task observation and interviews with postgraduate endodontic students. Overall, 10 critical tasks, which were most likely to cause harm to patients, were determined. Next, Hierarchical Task Analysis (HTA) was conducted and human errors in each task were identified with the Systematic Human Error Reduction and Prediction Approach (SHERPA) technique worksheets. Results: After analyzing the SHERPA worksheets, 90 human errors were identified: action errors (67.7%), checking errors (13.3%), selection errors (8.8%), retrieval errors (5.5%) and communication errors (4.4%). Action errors were thus the most frequent and communication errors the least. Conclusions: The results showed that the highest percentage of errors and the highest level of risk were associated with action errors; therefore, to reduce the occurrence of such errors and limit their consequences, control measures should be put in place, including periodic training on work procedures, provision of work checklists, development of guidelines and establishment of a systematic and standardized reporting system. Given these results, recovery errors, with the highest percentage of undesirable risk, and action errors, with the highest frequency, should be the priority for control

  10. Measuring disability: a systematic review of the validity and reliability of the Global Activity Limitations Indicator (GALI).

    Science.gov (United States)

    Van Oyen, Herman; Bogaert, Petronille; Yokota, Renata T C; Berger, Nicolas

    2018-01-01

    GALI, the Global Activity Limitation Indicator, is a global survey instrument measuring participation restriction. GALI is the measure underlying the European indicator Healthy Life Years (HLY) and has substantial policy use within the EU and its Member States. The objective of the current paper is to bring together what is known from published manuscripts on the validity and reliability of GALI. Following the PRISMA guidelines, two search strategies (PubMed, Google Scholar) were combined to identify manuscripts published in English with a publication date of 2000 or later. Articles were classified as reliability studies or as concurrent or predictive validity studies, in national or international populations. Four cross-sectional studies (of which 2 international) examined how GALI relates to other health measures (concurrent validity). A dose-response effect of GALI severity level on the association with the other health status measures was observed in the national studies. The 2 international studies (SHARE, EHIS) concluded that the odds of reporting participation restriction were higher in subjects with self-reported or observed functional limitations. In SHARE, the size of the odds ratios (ORs) in the different countries was homogeneous, while in EHIS the size of the ORs varied more strongly. For predictive validity, subjects were followed over time (4 studies, of which one international). GALI proved, in both national and international data, to be a consistent predictor of future health outcomes, both in terms of mortality and of health care expenditure. As predictors of mortality, the two distinct health concepts, self-rated health and GALI, acted independently and complementarily. The one reliability study identified reported sufficient reliability of GALI. GALI, as an inclusive one-question instrument, fits all the conceptual characteristics specified for a global measure of participation restriction. In none of the studies included in the review there was

  11. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  12. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave ''significantly'' better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs

  13. Irregular analytical errors in diagnostic testing - a novel concept.

    Science.gov (United States)

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Quantitative or qualitative data derived for many effects and interferences associated with an individual diagnostic sample can compromise any analyte. It is obvious that a process for a quality-control-sample-based approach of quality assurance is not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so high it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or an individual single sample associated processing error in the analytical process. 
Currently, the availability of reference measurement procedures is still highly limited, but LC

  14. Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Limiting factors for precise orbit determination (POD) of low-earth-orbit (LEO) satellites using dual-frequency GPS are nowadays mainly encountered in in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part of the phase error model are estimated, respectively, from bin-wise mean and standard deviation values of the phase postfit residuals computed by orbit determination. By removing the systematic component and adjusting the weight of the phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. GRACE data from 1–31 January 2006 are processed, and three types of orbit solutions are obtained: POD without phase error model correction, POD with mean-value correction of the phase error model, and POD with full phase error model correction. The three-dimensional (3D) orbit improvements derived from phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D effects arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. Phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data also demonstrate that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.
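The bin-wise residual modeling described in this abstract can be sketched as follows. This is an illustrative reduction to a single binning angle with hypothetical numbers; the paper bins by the direction of GPS signal reception, not by one angle.

```python
import numpy as np

def binwise_model(angle_deg, residuals, bin_width=5.0):
    """Bin-wise mean (systematic part) and standard deviation (random part)
    of phase postfit residuals versus a reception angle."""
    edges = np.arange(0.0, 90.0 + bin_width, bin_width)
    idx = np.digitize(angle_deg, edges) - 1
    means = np.full(len(edges) - 1, np.nan)
    stds = np.full(len(edges) - 1, np.nan)
    for b in range(len(edges) - 1):
        sel = idx == b
        if sel.any():
            means[b] = residuals[sel].mean()
            stds[b] = residuals[sel].std()
    return edges, means, stds

# In a POD iteration one would subtract means[b] from each observation in
# bin b (removing the systematic component) and weight the observation by
# 1 / stds[b]**2 (downweighting noisy reception directions).
edges, means, stds = binwise_model(np.array([2.0, 3.0, 7.0]),
                                   np.array([1.0, 3.0, 5.0]))
```

The two-pass structure (estimate the model from residuals, then re-run the orbit determination with corrected and reweighted observations) mirrors the iterative improvement reported in the abstract.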

  15. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these
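The standard attenuation bias referred to in this abstract can be demonstrated with a short simulation (illustrative numbers only, not the paper's data): classical measurement error in a regressor shrinks the OLS slope by the reliability ratio var(x) / (var(x) + var(error)).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, 1.0, n)              # true regressor (e.g. schooling)
y = 2.0 * x + rng.normal(0.0, 1.0, n)    # outcome, true slope = 2
x_obs = x + rng.normal(0.0, 1.0, n)      # regressor observed with error

def ols_slope(x_var, y_var):
    return np.cov(x_var, y_var)[0, 1] / np.var(x_var, ddof=1)

# Reliability ratio here: var(x) / (var(x) + var(error)) = 1 / (1 + 1) = 0.5,
# so the naive slope is attenuated toward 2 * 0.5 = 1.
naive_slope = ols_slope(x_obs, y)
corrected_slope = naive_slope / 0.5      # errors-in-variables correction
```

When the error variance is known (or estimable, as with perpetual-inventory construction), dividing by the reliability ratio recovers the true slope.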

  16. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were cl...

  17. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  18. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error; classification into systematic and random errors. Statistical fundamentals: probability theory, population distributions (Bernoulli, Poisson, Gauss), the t-test distribution, the χ² test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test table
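The error-propagation idea summarized above can be illustrated with a minimal counting-statistics sketch (hypothetical count values; assumes Poisson statistics, where var(N) = N, and independent gross and background measurements):

```python
import math

def net_count_rate(gross_counts, t_gross, bkg_counts, t_bkg):
    """Net count rate and its standard error for a radioactive measurement.

    For Poisson-distributed counts N, var(N) = N, so the rate N/t has
    variance N / t**2; independent uncertainties add in quadrature.
    """
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    sigma = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    return rate, sigma

# 4000 gross counts in 100 s, 900 background counts in 100 s:
rate, sigma = net_count_rate(4000, 100.0, 900, 100.0)
# net rate = 40.0 - 9.0 = 31.0 counts/s, sigma = sqrt(0.4 + 0.09) = 0.7
```

The same quadrature rule generalizes to any function of independent measured quantities via first-order error propagation.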

  19. IMRT QA: Selecting gamma criteria based on error detection sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Steers, Jennifer M. [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 and Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095 (United States); Fraass, Benedick A., E-mail: benedick.fraass@cshs.org [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 (United States)

    2016-04-15

    Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose
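As a rough illustration of the gamma comparison itself (a generic 1D global gamma index, not the authors' ArcCHECK analysis; profile shapes and numbers are hypothetical):

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_frac=0.03, dta_mm=3.0):
    """Simplified 1D global gamma index.

    For each reference point, gamma is the minimum over evaluated points of
    sqrt((distance/DTA)**2 + (dose diff / (dose_frac * max ref dose))**2);
    a point passes when gamma <= 1.
    """
    norm = dose_frac * ref_dose.max()        # global dose normalization
    gammas = np.empty_like(ref_dose)
    for i, (x_r, d_r) in enumerate(zip(positions, ref_dose)):
        dist_term = ((positions - x_r) / dta_mm) ** 2
        dose_term = ((eval_dose - d_r) / norm) ** 2
        gammas[i] = np.sqrt(np.min(dist_term + dose_term))
    return gammas

x = np.arange(0.0, 50.0, 1.0)                         # positions in mm
ref = 100.0 * np.exp(-((x - 25.0) ** 2) / 200.0)      # synthetic profile
shifted = 100.0 * np.exp(-((x - 26.0) ** 2) / 200.0)  # 1 mm shift
pass_rate = np.mean(gamma_1d(ref, shifted, x) <= 1.0)
```

As the abstract notes for real plans, small spatial errors such as this 1 mm shift pass 3%/3 mm easily; tightening the criteria or raising the dose threshold is what exposes them.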

  20. A Systematic Approach to Error Free Telemetry

    Science.gov (United States)

    2017-06-28

    ...FORCE BASE, CALIFORNIA. AIR FORCE MATERIEL COMMAND, UNITED STATES AIR FORCE, 412TW. This Technical Information Memorandum (412TW-TIM-17-03, A... INTRODUCTION: The airborne telemetry channel between the test article and the ground receiving station introduces impairments that distort the received signal... the airframe under certain airplane-to-ground station geometries can exist should only one transmit antenna be used. Conversely, using two transmit

  1. Limits on quark-lepton compositeness and studies of W asymmetry at the Tevatron collider

    International Nuclear Information System (INIS)

    Bodek, A.

    1996-10-01

    Drell-Yan dilepton production at high invariant mass places strong limits on quark substructure. Compositeness limits from CDF Run 1 and the expected sensitivity in Run II and at TEV33 are presented. The W asymmetry data constrain the slope of the d/u quark distributions and significantly reduce the systematic error on the extracted value of the W mass

  2. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
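The attenuation that measurement error induces in the AUC can be seen in a small simulation under normality (illustrative numbers only; note the paper's proposed correction specifically does not require the normality assumption used here):

```python
import math
import numpy as np

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rng = np.random.default_rng(0)
n = 200_000
delta, err_sd = 1.0, 1.0            # case-control mean shift; error SD

cases = rng.normal(delta, 1.0, n)
controls = rng.normal(0.0, 1.0, n)

# True AUC = P(case biomarker > control biomarker) = Phi(delta / sqrt(2))
true_auc = normal_cdf(delta / math.sqrt(2.0))   # about 0.76

# Adding independent error variance to both groups attenuates the AUC:
# observed AUC = Phi(delta / sqrt(2 + 2 * err_sd**2)) = Phi(0.5) here
noisy_cases = cases + rng.normal(0.0, err_sd, n)
noisy_controls = controls + rng.normal(0.0, err_sd, n)

est_true = np.mean(cases > controls)            # paired-sample AUC estimate
est_noisy = np.mean(noisy_cases > noisy_controls)
```

Inverting the attenuation formula (here, rescaling by the known error variance) is the normal-theory version of the correction; the article's method generalizes this without the distributional assumption.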

  3. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, D; Ehler, E [University of Minnesota, Minneapolis, MN (United States)

    2015-06-15

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing.

  4. SU-E-T-114: Analysis of MLC Errors On Gamma Pass Rates for Patient-Specific and Conventional Phantoms

    International Nuclear Information System (INIS)

    Sterling, D; Ehler, E

    2015-01-01

    Purpose: To evaluate whether a 3D patient-specific phantom is better able to detect known MLC errors in a clinically delivered treatment plan than conventional phantoms. 3D printing may make fabrication of such phantoms feasible. Methods: Two types of MLC errors were introduced into a clinically delivered, non-coplanar IMRT, partial brain treatment plan. First, uniformly distributed random errors of up to 3mm, 2mm, and 1mm were introduced into the MLC positions for each field. Second, systematic MLC-bank position errors of 5mm, 3.5mm, and 2mm due to simulated effects of gantry and MLC sag were introduced. The original plan was recalculated with these errors on the original CT dataset as well as cylindrical and planar IMRT QA phantoms. The original dataset was considered to be a perfect 3D patient-specific phantom. The phantoms were considered to be ideal 3D dosimetry systems with no resolution limitations. Results: Passing rates for Gamma Index (3%/3mm and no dose threshold) were calculated on the 3D phantom, cylindrical phantom, and both on a composite and field-by-field basis for the planar phantom. Pass rates for 5mm systematic and 3mm random error were 86.0%, 89.6%, 98% and 98.3% respectively. For 3.5mm systematic and 2mm random error the pass rates were 94.7%, 96.2%, 99.2% and 99.2% respectively. For 2mm systematic error with 1mm random error the pass rates were 99.9%, 100%, 100% and 100% respectively. Conclusion: A 3D phantom with the patient anatomy is able to discern errors, both severe and subtle, that are not seen using conventional phantoms. Therefore, 3D phantoms may be beneficial for commissioning new treatment machines and modalities, patient-specific QA and end-to-end testing

  5. Communication and support from health-care professionals to families, with dependent children, following the diagnosis of parental life-limiting illness: A systematic review.

    Science.gov (United States)

    Fearnley, Rachel; Boland, Jason W

    2017-03-01

    Communication between parents and their children about parental life-limiting illness is stressful. Parents want support from health-care professionals; however, the extent of this support is not known. Awareness of families' needs would help ensure appropriate support. The aims were to find current literature exploring (1) how parents with a life-limiting illness who have dependent children perceive health-care professionals' communication with them about the illness, diagnosis and treatments, including how social, practical and emotional support is offered to them, and (2) how this contributes to the parents' feelings of supporting their children. A systematic literature review and narrative synthesis were conducted. Embase, MEDLINE, PsycINFO, CINAHL and ASSIA ProQuest were searched in November 2015 for studies assessing communication between health-care professionals and parents about how to talk with their children about the parent's illness. There were 1342 records identified; five qualitative studies met the inclusion criteria (55 ill parents, 11 spouses/carers, 26 children and 16 health-care professionals). Parents wanted information from health-care professionals about how to talk to their children about the illness; this was not routinely offered. Children also want to talk with a health-care professional about their parent's illness. Health-care professionals are concerned that conversations with parents and their children will be too difficult and time-consuming. Parents with a life-limiting illness want support from their health-care professionals on how to communicate with their children about the illness. Their children look to health-care professionals for information about their parent's illness. Health-care professionals have an important role but appear reluctant to address these concerns because of fears of insufficient time and expertise.

  6. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals, we reviewed 515 scientific paper references selected by systematic randomized sampling and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion on this theme. Key words: references, periodicals, research, bibliometrics.

  7. Vitrectomy with internal limiting membrane peeling versus inverted internal limiting membrane flap technique for macular hole-induced retinal detachment: a systematic review of literature and meta-analysis.

    Science.gov (United States)

    Yuan, Jing; Zhang, Ling-Lin; Lu, Yu-Jie; Han, Meng-Yao; Yu, Ai-Hua; Cai, Xiao-Jun

    2017-11-28

    To evaluate the effects of vitrectomy with internal limiting membrane (ILM) peeling versus vitrectomy with the inverted ILM flap technique for macular hole-induced retinal detachment (MHRD), PubMed, the Cochrane Library, and Embase were systematically searched for studies that compared ILM peeling with the inverted ILM flap technique for MHRD. The primary outcomes are the rate of retinal reattachment and the rate of macular hole closure at 6 months after initial surgery; the secondary outcome is the postoperative best-corrected visual acuity (BCVA) at 6 months after initial surgery. Four studies that included 98 eyes were selected; all were retrospective comparative studies. The preoperative best-corrected visual acuity was equal between the ILM peeling and inverted ILM flap technique groups. The rates of retinal reattachment (odds ratio (OR) = 0.14, 95% confidence interval (CI): 0.03 to 0.69; P = 0.02) and macular hole closure (OR = 0.06, 95% CI: 0.02 to 0.19) were significantly lower with ILM peeling. However, there was no statistically significant difference in postoperative best-corrected visual acuity (mean difference (MD) 0.18 logarithm of the minimum angle of resolution; 95% CI: -0.06 to 0.43; P = 0.14) between the two surgery groups. Compared with ILM peeling, vitrectomy with the inverted ILM flap technique resulted in significantly higher rates of retinal reattachment and macular hole closure, but did not appear to improve postoperative best-corrected visual acuity.

  8. Distance error correction for time-of-flight cameras

    Science.gov (United States)

    Fuersattel, Peter; Schaller, Christian; Maier, Andreas; Riess, Christian

    2017-06-01

    The measurement accuracy of time-of-flight cameras is limited by properties of the scene and by systematic errors. These errors can accumulate to multiple centimeters, which may limit the applicability of these range sensors. In the past, different approaches have been proposed for improving the accuracy of these cameras. In this work, we propose a new method that improves two important aspects of range calibration. First, we propose a new checkerboard which is augmented by a gray-level gradient. With this addition it becomes possible to capture the calibration features for intrinsic and distance calibration at the same time. The gradient strip makes it possible to acquire a large number of distance measurements for different surface reflectivities, which results in more meaningful training data. Second, we present multiple new features which are used as input to a random forest regressor. By using random regression forests, we circumvent the problem of finding an accurate model for the measurement error. During application, a correction value for each individual pixel is estimated with the trained forest based on a specifically tailored feature vector. With our approach the measurement error can be reduced by more than 40% for the Mesa SR4000 and by more than 30% for the Microsoft Kinect V2. In our evaluation we also investigate the impact of the individual forest parameters and illustrate the importance of the individual features.
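The regression-forest idea from this abstract can be sketched on synthetic data. The feature choice (measured distance plus amplitude) and the shape of the simulated error below are hypothetical stand-ins, not the paper's tailored feature vector:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 5000
measured = rng.uniform(0.5, 5.0, n)      # measured distance in metres
amplitude = rng.uniform(0.1, 1.0, n)     # per-pixel signal amplitude

# Simulated systematic error: distance-periodic "wiggling" plus a
# reflectivity-dependent offset (illustrative model only).
error = 0.03 * np.sin(2 * np.pi * measured) + 0.02 * (1.0 - amplitude)
X = np.column_stack([measured, amplitude])

# Train a forest to predict the error from the features, then subtract
# the predicted correction from each measurement.
forest = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, error)
corrected = measured - forest.predict(X)

true_dist = measured - error
rmse_before = np.sqrt(np.mean(error ** 2))
rmse_after = np.sqrt(np.mean((corrected - true_dist) ** 2))
```

Learning the error directly sidesteps the need for an explicit parametric model of the wiggling error, which is the design choice the abstract highlights.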

  9. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  10. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  11. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

    by individual C=O bonds. Energy corrections applied to C=O bonds significantly reduce systematic errors and can be extended to adsorbates. A similar study is performed for intermediates in the oxygen evolution and oxygen reduction reactions. An identified systematic error on peroxide bonds is found to also...... be present in the OOH* adsorbate. However, the systematic error will almost be canceled by inclusion of van der Waals energy. The energy difference between key adsorbates is thus similar to that previously found. Finally, a method is developed for error estimation in computationally inexpensive neural...

  12. Training in time-limited dynamic psychotherapy: A systematic comparison of pre- and post-training cases treated by one therapist.

    Science.gov (United States)

    Anderson, Timothy; Strupp, Hans H

    2015-01-01

    This qualitative study systematically compared cases treated by the same therapist in order to understand the group comparison findings of a larger study on the training of experienced therapists (the "Vanderbilt II" psychotherapy project). The therapist, Dr C., was selected on the basis of his overall treatment successes. His two patients were selected based on their outcomes and the training cohort from which they were drawn: a case with a successful outcome from the pre-training cohort and a case of negligible improvement from the post-training cohort. Dr C. demonstrated a variety of interpersonal skills throughout his pre-training case, though there was also poor interpersonal process throughout. However, in the second case he had considerable difficulty adapting his typical therapeutic approach to the requirements of the time-limited dynamic psychotherapy (TLDP) manual, even while appearing to work hard to find ways to use it. Dr C.'s spontaneity and his unique set of interpersonal skills may have enhanced his initial rapport and alliance building with clients and yet may not have interfaced well with TLDP. His unique interpersonal skills may also have contributed to problems of interpersonal process. Future research may benefit from examining the interaction between therapist interpersonal skills and the implementation of the treatment manual.

  13. Poor quality of external validity reporting limits generalizability of overweight and/or obesity lifestyle prevention interventions in young adults: a systematic review.

    Science.gov (United States)

    Partridge, S R; Juan, S J-H; McGeechan, K; Bauman, A; Allman-Farinelli, M

    2015-01-01

    Young adulthood is a high-risk life stage for weight gain. Evidence is needed to translate behavioural approaches into community practice to prevent weight gain in young adults. This systematic review assessed the effectiveness and reporting of external validity components in prevention interventions. The search was limited to randomized controlled trial (RCT) lifestyle interventions for the prevention of weight gain in young adults (18-35 years). Mean body weight and/or body mass index (BMI) change were the primary outcomes. External validity, quality assessment and risk of bias tools were applied to all studies. Twenty-one RCTs were identified through 14 major electronic databases. Over half of the studies were effective in the short term for significantly reducing body weight and/or BMI; however, few showed long-term maintenance. All studies lacked full reporting on external validity components. Description of the intervention components and participant attrition rates were reported by most studies. However, few studies reported the representativeness of participants, effectiveness of recruitment methods, process evaluation detail or costs. It is unclear from the information reported how to implement the interventions into community practice. Integrated reporting of intervention effectiveness and enhanced reporting of external validity components are needed for the translation and potential upscale of prevention strategies. © 2014 World Obesity.

  14. Evaluation of measurement precision errors at different bone density values

    International Nuclear Information System (INIS)

    Wilson, M.; Wong, J.; Bartlett, M.; Lee, N.

    2002-01-01

    Full text: The precision error commonly used in serial monitoring of BMD values using Dual Energy X-ray Absorptiometry (DEXA) is 0.01-0.015 g/cm² for both the L2-L4 lumbar spine and the total femur. However, this limit is based on normal individuals with bone densities similar to the population mean. The purpose of this study was to systematically evaluate precision errors over the range of bone density values encountered in clinical practice. In 96 patients a BMD scan of the spine and femur was immediately repeated by the same technologist, with the patient taken off the bed and repositioned between scans. Nine technologists participated. Values were obtained for the total femur and spine. Each value was classified as low range (0.75-1.05 g/cm²) or medium range (1.05-1.35 g/cm²) for the spine, and low range (0.55-0.85 g/cm²) or medium range (0.85-1.15 g/cm²) for the total femur. Results show that the precision error was significantly lower in the medium range for total femur results, with the medium range value at 0.015 g/cm² and the low range at 0.025 g/cm² (p<0.01). No significant difference was found for the spine results. We also analysed precision errors between three technologists and found that a significant difference (p=0.05) occurred between only two technologists, and this was seen in the spine data only. We conclude that there is some evidence that the precision error increases at the outer limits of the normal bone density range. Also, the results show that having multiple trained operators does not greatly increase the BMD precision error. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc
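    For duplicate scans like these, the short-term precision error is conventionally computed as the root-mean-square standard deviation over patients; a sketch with made-up BMD pairs (not the study's data):

    ```python
    # Illustrative calculation of short-term precision (RMS-SD) from paired
    # repeat BMD scans; the values below are invented, not the study's data.
    import math

    # (scan1, scan2) BMD pairs in g/cm^2 for a few hypothetical patients
    pairs = [(0.912, 0.905), (1.104, 1.119), (0.876, 0.869), (1.250, 1.241)]

    # For duplicate measurements, each pair contributes d^2 / 2 to the
    # pooled variance estimate
    rms_sd = math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * len(pairs)))
    lsc = 2.77 * rms_sd  # least significant change at 95% confidence
    print(f"precision error: {rms_sd:.4f} g/cm^2, LSC: {lsc:.4f} g/cm^2")
    ```

    A measured serial change smaller than the LSC cannot be distinguished from the precision error, which is why the study's finding that precision worsens at low BMD matters clinically.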

  15. Eddington-limited X-Ray Bursts as Distance Indicators. I. Systematic Trends and Spherical Symmetry in Bursts from 4U 1728-34

    Science.gov (United States)

    Galloway, Duncan K.; Psaltis, Dimitrios; Chakrabarty, Deepto; Muno, Michael P.

    2003-06-01

    We investigate the limitations of thermonuclear X-ray bursts as a distance indicator for the weakly magnetized accreting neutron star 4U 1728-34. We measured the unabsorbed peak flux of 81 bursts in public data from the Rossi X-Ray Timing Explorer (RXTE). The distribution of peak fluxes was bimodal: 66 bursts exhibited photospheric radius expansion (presumably reaching the local Eddington limit) and were distributed about a mean bolometric flux of 9.2×10⁻⁸ erg cm⁻² s⁻¹, while the remaining (non-radius expansion) bursts reached 4.5×10⁻⁸ erg cm⁻² s⁻¹, on average. The peak fluxes of the radius expansion bursts were not constant, exhibiting a standard deviation of 9.4% and a total variation of 46%. These bursts showed significant correlations between their peak flux and the X-ray colors of the persistent emission immediately prior to the burst. We also found evidence for quasi-periodic variation of the peak fluxes of radius expansion bursts, with a timescale of ≈40 days. The persistent flux observed with RXTE/ASM over 5.8 yr exhibited quasi-periodic variability on a similar timescale. We suggest that these variations may have a common origin in reflection from a warped accretion disk. Once the systematic variation of the peak burst fluxes is subtracted, the residual scatter is only ≈3%, roughly consistent with the measurement uncertainties. The narrowness of this distribution strongly suggests that (1) the radiation from the neutron star atmosphere during radius expansion episodes is nearly spherically symmetric and (2) the radius expansion bursts reach a common peak flux that may be interpreted as a standard candle intensity. Adopting the minimum peak flux for the radius expansion bursts as the Eddington flux limit, we derive a distance for the source of 4.4-4.8 kpc (assuming R_NS = 10 km), with the uncertainty arising from the probable range of the neutron star mass M_NS = 1.4-2 M_solar.
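    The distance estimate follows from treating the peak flux as a standard candle, d = √(L_Edd / 4πF_peak); the sketch below ignores the gravitational redshift and atmosphere corrections of the full analysis, and the assumed Eddington luminosity (helium atmosphere, ~1.4 M_solar) is our illustrative choice:

    ```python
    # Back-of-the-envelope distance from an Eddington-limited burst flux:
    # d = sqrt(L_Edd / (4 pi F_peak)).  Simplified sketch that ignores the
    # redshift and atmosphere corrections included in the full analysis.
    import math

    KPC_CM = 3.086e21   # cm per kiloparsec
    L_EDD = 2.5e38      # erg/s, helium atmosphere, ~1.4 Msun (assumed value)
    F_PEAK = 9.2e-8     # erg/cm^2/s, mean radius-expansion peak flux

    d_cm = math.sqrt(L_EDD / (4 * math.pi * F_PEAK))
    print(f"distance ≈ {d_cm / KPC_CM:.1f} kpc")
    ```

    With these round numbers the result lands near the paper's 4.4-4.8 kpc range, which is the sense in which radius-expansion bursts act as standard candles.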

  16. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  17. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed, along with possible correcting coils for reducing such field errors

  18. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  19. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
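    The general first-order propagation formula such a chapter builds on, var(f) ≈ Σᵢ (∂f/∂xᵢ)² var(xᵢ) for independent measurements, can be sketched numerically; the net-mass example below is invented for illustration and is not taken from the chapter:

    ```python
    # Generic first-order error propagation for independent measured values:
    # var(f) ≈ sum_i (df/dx_i)^2 var(x_i).  Illustrative example only.
    import math

    def propagate(f, values, sigmas, h=1e-6):
        """Numerically propagate 1-sigma uncertainties through f."""
        grads = []
        for i in range(len(values)):
            up = list(values); up[i] += h
            dn = list(values); dn[i] -= h
            grads.append((f(*up) - f(*dn)) / (2 * h))  # central difference
        return math.sqrt(sum((g * s) ** 2 for g, s in zip(grads, sigmas)))

    # Hypothetical example: net mass = (gross - tare) * uranium fraction
    mass = lambda gross, tare, frac: (gross - tare) * frac
    sigma = propagate(mass, [105.0, 5.0, 0.90], [0.10, 0.05, 0.005])
    print(f"U mass = {mass(105.0, 5.0, 0.90):.2f} ± {sigma:.3f}")
    ```

    The same routine extends to materials-balance expressions by passing in the balance function and the per-measurement uncertainties.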

  20. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  1. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    Full Text Available The main purpose of the present study is to identify and represent TFL learners' writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by obtaining the C1 (advanced) certificate at TÖMER at Gaziantep University. The data of the present study were collected from 14 students' writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of the identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors.

  2. The influence of systematic pulse-limited physical exercise on the parameters of the cardiovascular system in patients over 65 years of age.

    Science.gov (United States)

    Chomiuk, Tomasz; Folga, Andrzej; Mamcarz, Artur

    2013-04-20

    The influence of physical exercise on the parameters of the cardiovascular system of elderly persons has not been sufficiently investigated yet. The aim of the study was to assess the influence of regular 6-week physical exercise using the Nordic walking (NW) method in a group of elderly persons on their physical performance and regulation of selected parameters assessing the cardiovascular system. Fifty patients over 65 years of age participated in the study. The study encompassed: medical interview, physical examination, resting ECG, spiroergometry examination, 6MWT (6-minute walk test) and 24-hour ambulatory blood pressure monitoring (ABPM). During the exercise programme, the pulse was monitored using pulsometers. After the completion of the training, check-up tests assessing the same parameters were performed. The control group consisted of 18 persons over 65 years of age with similar cardiovascular problems. In the test group, duration of the physical effort increased by 1.02 min (p = 0.0001), the maximum load increased by 10.68 W (p = 0.0001), values of VO2max by 2.10 (p = 0.0218), distance improved in 6MWT by 75.04 m (p = 0.00001), systolic blood pressure decreased by 5.50 mm Hg (p = 0.035) and diastolic blood pressure by 3.50 mm Hg (p = 0.054) as compared to the control group. Systematic NW physical exercise limited by the pulse had a beneficial effect on the physical performance of elderly persons as assessed with main parameters. A short 6-week programme of endurance exercises had a hypotensive effect in elderly persons over 65 years of age.

  3. A Systematic Review of Non-Traumatic Spinal Cord Injuries in Sub-Saharan Africa and a Proposed Diagnostic Algorithm for Resource-Limited Settings

    Directory of Open Access Journals (Sweden)

    Abdu Kisekka Musubire

    2017-12-01

    Full Text Available Background: Non-traumatic myelopathy is common in Africa and there are geographic differences in etiology. Clinical management is challenging due to the broad differential diagnosis and the lack of diagnostics. The objective of this systematic review is to determine the most common etiologies of non-traumatic myelopathy in sub-Saharan Africa to inform a regionally appropriate diagnostic algorithm. Methods: We conducted a systematic review searching the Medline and Embase databases using the following search terms: "non-traumatic spinal cord injury" or "myelopathy", with limitations to epidemiology or etiologies and sub-Saharan Africa. We described the frequencies of the different etiologies and propose a diagnostic algorithm based on the most common diagnoses. Results: We identified 19 studies, all performed at tertiary institutions; 15 were retrospective and 13 were published in the era of the HIV epidemic. Compressive bone lesions accounted for more than 48% of the cases; a majority were Pott's disease and metastatic disease. No diagnosis was identified in up to 30% of cases in most studies; in particular, definitive diagnoses of non-compressive lesions were rare, and a majority were clinical diagnoses of transverse myelitis and HIV myelopathy. Age and HIV status were major determinants of etiology. Conclusion: Compressive myelopathies represent a majority of non-traumatic myelopathies in sub-Saharan Africa, and most were due to Pott's disease. Non-compressive myelopathies have not been well defined and need further research in Africa. We recommend a standardized approach to the management of non-traumatic myelopathy focused on identifying treatable conditions with tests widely available in low-resource settings.

  4. EFFECTS OF INTERNAL LIMITING MEMBRANE PEELING COMBINED WITH REMOVAL OF IDIOPATHIC EPIRETINAL MEMBRANE: A Systematic Review of Literature and Meta-Analysis.

    Science.gov (United States)

    Azuma, Kunihiro; Ueta, Takashi; Eguchi, Shuichiro; Aihara, Makoto

    2017-10-01

    To evaluate the effects on postoperative prognosis of internal limiting membrane (ILM) peeling in conjunction with removal of idiopathic epiretinal membranes (ERMs). MEDLINE, the Cochrane Central Register of Controlled Trials (CENTRAL), and EMBASE were systematically searched for studies that compared ILM peeling with no ILM peeling in surgery to remove idiopathic ERM. Outcome measures were best-corrected visual acuity, central macular thickness, and ERM recurrence. Studies that compared ILM peeling with no ILM peeling for the treatment of idiopathic ERM were selected. Sixteen studies that included 1,286 eyes were selected. All the included studies were retrospective or prospective comparative studies; no randomized controlled study was identified. Baseline preoperative best-corrected visual acuity and central macular thickness were equal between the ILM peeling and no ILM peeling groups. Postoperatively, there was no statistically significant difference in best-corrected visual acuity (mean difference 0.01 logarithm of the minimum angle of resolution [equivalent to 0.5 Early Treatment Diabetic Retinopathy Study letter]; 95% CI -0.05 to 0.07 [-3.5 to 2.5 Early Treatment Diabetic Retinopathy Study letters]; P = 0.83) or central macular thickness (mean difference 13.13 μm; 95% CI -10.66 to 36.93; P = 0.28). However, the recurrence rate of ERM was significantly lower with ILM peeling than with no ILM peeling (odds ratio 0.25; 95% CI 0.12-0.49). ILM peeling in vitrectomy for idiopathic ERM could therefore result in a significantly lower ERM recurrence rate, but it does not significantly influence postoperative best-corrected visual acuity or central macular thickness.

  5. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  6. The use of adaptive radiation therapy to reduce setup error: a prospective clinical study

    International Nuclear Information System (INIS)

    Yan Di; Wong, John; Vicini, Frank; Robertson, John; Horwitz, Eric; Brabbins, Donald; Cook, Carla; Gustafson, Gary; Stromberg, Jannifer; Martinez, Alvaro

    1996-01-01

    Purpose: Adaptive Radiation Therapy (ART) is a closed-loop feedback process in which each patient's treatment is adaptively optimized according to the individual variation information measured during the course of treatment. The process aims to maximize the benefits of treatment for the individual patient. A prospective study is currently being conducted to test the feasibility and effectiveness of ART for clinical use. The present study is limited to compensating for the effects of systematic setup error. Methods and Materials: The study includes 20 patients treated on a linear accelerator equipped with a computer-controlled multileaf collimator (MLC) and an electronic portal imaging device (EPID). Alpha cradles are used to immobilize those patients treated for disease in the thoracic and abdominal regions, and thermal plastic masks for the head and neck. Portal images are acquired daily. Setup error of each treatment field is quantified off-line every day. As determined from an earlier retrospective study of different clinical sites, the measured setup variations from the first 4 to 9 days are used to estimate the systematic setup error and the standard deviation of the random setup error for each field. A setup adjustment is made if the estimated systematic setup error of the treatment field is larger than or equal to 2 mm. Instead of the conventional approach of repositioning the patient, setup correction is implemented by reshaping the MLC field to compensate for the estimated systematic error. The entire process, from analysis of portal images to the implementation of the modified MLC field, is performed via computer network. Systematic and random setup errors of the treatment after adjustment are compared with those prior to adjustment. Finally, the frequency distributions of block overlap cumulated throughout the treatment course are evaluated. Results: Sixty-seven percent of all treatment fields were reshaped to compensate for the estimated systematic errors. At the time of this writing
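    A minimal sketch of the off-line decision rule described above: estimate the systematic error as the mean daily offset over the early fractions, the random error as their standard deviation, and trigger a field reshape at the 2 mm threshold. The offsets are made-up numbers, not patient data:

    ```python
    # Sketch of the ART off-line setup-error rule: systematic error = mean
    # of early daily offsets, random error = their SD, adjust at >= 2 mm.
    import statistics

    THRESHOLD_MM = 2.0

    def setup_decision(daily_offsets_mm):
        """Return (systematic, random SD, adjust?) from early-course offsets."""
        systematic = statistics.mean(daily_offsets_mm)
        random_sd = statistics.stdev(daily_offsets_mm)
        return systematic, random_sd, abs(systematic) >= THRESHOLD_MM

    # Hypothetical daily offsets (mm) for one field over the first 5 fractions
    sys_err, rand_err, adjust = setup_decision([2.5, 3.1, 1.8, 2.9, 2.2])
    print(f"systematic={sys_err:.1f} mm, random SD={rand_err:.1f} mm, "
          f"adjust={adjust}")
    ```

    In the study the correction is then applied by reshaping the MLC field rather than repositioning the patient.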

  7. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background. The issues of identifying a method to extract medical error factors and of reducing the extraction difficulty need to be resolved. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted; these were then closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared to BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.
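    A rough sketch of the item-to-factor relational model; note that this substitutes a plain back-propagation network (scikit-learn's MLPRegressor) for the paper's GA-optimized BPNN, and the 19-item, 12-factor data are synthetic:

    ```python
    # Stand-in for the item->factor relational model: a plain back-propagation
    # network without the genetic-algorithm weight search used in the paper.
    # All data below are synthetic, for illustration only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n_cases, n_items, n_factors = 600, 19, 12
    X = rng.integers(0, 4, size=(n_cases, n_items)).astype(float)  # item levels
    W = rng.normal(size=(n_items, n_factors))      # hidden "true" mapping
    Y = X @ W + rng.normal(0, 0.1, size=(n_cases, n_factors))  # factor scores

    net = MLPRegressor(hidden_layer_sizes=(32,), solver="lbfgs",
                       max_iter=2000, random_state=0)
    net.fit(X[:500], Y[:500])
    r2 = net.score(X[500:], Y[500:])   # held-out fit of the relational model
    print(f"held-out R^2: {r2:.2f}")
    ```

    In the paper the GA searches the network's initial weights before back-propagation, which is what distinguished GA-BPNN from the plain BPNN baseline.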

  8. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes.
There was some relationship to
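    The headline benchmark above (170 affected fields out of 93,332, or 0.18%) can be checked, and given a confidence interval, in a few lines; the Wilson interval is our addition, not part of the study:

    ```python
    # Check of the reported benchmark: 170 affected fields out of 93,332
    # delivered, with a Wilson 95% confidence interval on the rate.
    import math

    errors, fields, z = 170, 93_332, 1.96
    p = errors / fields
    denom = 1 + z**2 / fields
    center = (p + z**2 / (2 * fields)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / fields
                                   + z**2 / (4 * fields**2))
    print(f"rate = {100*p:.2f}%, "
          f"95% CI ≈ [{100*(center-half):.3f}%, {100*(center+half):.3f}%]")
    ```

    The narrow interval reflects the very large number of delivered fields behind the 0.18% figure.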

  9. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing the outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions of a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation, and especially not to the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort at their detection, a sound analysis and, where feasible, the institution of preventive measures.

  10. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to gain deeper insights into systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because, especially in basic varieties, forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. By contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some of the potential of SLA-oriented (non-error-based) tagging is made clearer.

  11. Search, Memory, and Choice Error: An Experiment.

    Science.gov (United States)

    Sanjurjo, Adam

    2015-01-01

    Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways-a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in peoples' behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure.

  12. Search, Memory, and Choice Error: An Experiment.

    Directory of Open Access Journals (Sweden)

    Adam Sanjurjo

Full Text Available Multiple attribute search is a central feature of economic life: we consider much more than price when purchasing a home, and more than wage when choosing a job. An experiment is conducted in order to explore the effects of cognitive limitations on choice in these rich settings, in accordance with the predictions of a new model of search memory load. In each task, subjects are made to search the same information in one of two orders, which differ in predicted memory load. Despite standard models of choice treating such variations in order of acquisition as irrelevant, lower predicted memory load search orders are found to lead to substantially fewer choice errors. An implication of the result for search behavior, more generally, is that in order to reduce memory load (thus choice error) a limited memory searcher ought to deviate from the search path of an unlimited memory searcher in predictable ways-a mechanism that can explain the systematic deviations from optimal sequential search that have recently been discovered in people's behavior. Further, as cognitive load is induced endogenously (within the task), and found to affect choice behavior, this result contributes to the cognitive load literature (in which load is induced exogenously), as well as the cognitive ability literature (in which cognitive ability is measured in a separate task). In addition, while the information overload literature has focused on the detrimental effects of the quantity of information on choice, this result suggests that, holding quantity constant, the order that information is observed in is an essential determinant of choice failure.

  13. Notes on human error analysis and prediction

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1978-11-01

The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human error and their role in industrial systems are described. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and "criteria for analyzability" which must be met by industrial systems so that a systematic analysis can be performed are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing as found in the US Licensee Event Reports. (author)

  14. Analysis of field errors in existing undulators

    International Nuclear Information System (INIS)

    Kincaid, B.M.

    1990-01-01

The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs

  15. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  16. Low Tidal Volume versus Non-Volume-Limited Strategies for Patients with Acute Respiratory Distress Syndrome. A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Walkey, Allan J; Goligher, Ewan C; Del Sorbo, Lorenzo; Hodgson, Carol L; Adhikari, Neill K J; Wunsch, Hannah; Meade, Maureen O; Uleryk, Elizabeth; Hess, Dean; Talmor, Daniel S; Thompson, B Taylor; Brower, Roy G; Fan, Eddy

    2017-10-01

Trials investigating use of lower tidal volumes and inspiratory pressures for patients with acute respiratory distress syndrome (ARDS) have shown mixed results. To compare clinical outcomes of mechanical ventilation strategies that limit tidal volumes and inspiratory pressures (LTV) to strategies with tidal volumes of 10 to 15 ml/kg among patients with ARDS. This is a systematic review and meta-analysis of clinical trials investigating LTV mechanical ventilation strategies. We used random effects models to evaluate the effect of LTV on 28-day mortality, organ failure, ventilator-free days, barotrauma, oxygenation, and ventilation. Our primary analysis excluded trials for which the LTV strategy was combined with the additional strategy of higher positive end-expiratory pressure (PEEP), but these trials were included in a stratified sensitivity analysis. We performed metaregression of tidal volume gradient achieved between intervention and control groups on mortality effect estimates. We used Grading of Recommendations Assessment, Development, and Evaluation methodology to determine the quality of evidence. Seven randomized trials involving 1,481 patients met eligibility criteria for this review. Mortality was not significantly lower for patients receiving an LTV strategy (33.6%) as compared with control strategies (40.4%) (relative risk [RR], 0.87; 95% confidence interval [CI], 0.70–1.08; heterogeneity statistic I² = 46%), nor did an LTV strategy significantly decrease barotrauma or ventilator-free days when compared with a lower PEEP strategy. Quality of evidence for clinical outcomes was downgraded for imprecision. Metaregression showed a significant inverse association between larger tidal volume gradient between LTV and control groups and log odds ratios for mortality (β, -0.1587; P = 0.0022). Sensitivity analysis including trials that protocolized an LTV/high PEEP cointervention showed lower mortality associated with LTV (nine trials and 1
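The metaregression step described above (log odds ratios for mortality regressed on the tidal-volume gradient between trial arms, weighted by inverse variance) can be sketched as a weighted least-squares fit. This is a minimal illustration with hypothetical study-level numbers, not the review's actual dataset:

```python
def metaregression(x, y, v):
    """Inverse-variance weighted least squares: regress study effect sizes y
    (e.g., log odds ratios for mortality) on a study-level covariate x
    (e.g., tidal-volume gradient between arms), with weights 1/variance."""
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    beta = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
            / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    return beta, ybar - beta * xbar

# hypothetical studies: tidal-volume gradient (ml/kg) vs log OR for mortality
grad = [1.0, 2.0, 3.0]
log_or = [-0.06, -0.22, -0.38]
var = [0.04, 0.05, 0.06]
slope, intercept = metaregression(grad, log_or, var)
print(slope, intercept)  # negative slope: larger gradients, lower mortality odds
```

A full meta-regression would also report a standard error for the slope and a residual between-study heterogeneity estimate; this sketch shows only the weighted point estimate.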

  17. Limited evidence on persistence with anticoagulants, and its effect on the risk of recurrence of venous thromboembolism: a systematic review of observational studies

    Directory of Open Access Journals (Sweden)

    Vora P

    2016-08-01

Full Text Available Pareen Vora, Montse Soriano-Gabarró, Kiliana Suzart, Gunnar Persson Brobert Department of Epidemiology, Bayer Pharma AG, Berlin, Germany Purpose: The risk of venous thromboembolism (VTE) recurrence is high following an initial VTE event, and it persists over time. This recurrence risk decreases rapidly after starting anticoagulation treatment and is reduced by ~80%–90% with prolonged anticoagulation. Nonpersistence with anticoagulants could lead to an increased risk of VTE recurrence. This systematic review aimed to estimate persistence with anticoagulants at 3, 6, and 12 months in patients with VTE, and to evaluate the risk of VTE recurrence in nonpersistent patients. Methods: PubMed and Embase® were searched up to May 3, 2014, and the search results were updated to May 31, 2015. Studies involving patients with VTE aged ≥18 years, treatment with anticoagulants intended for at least 3 months, and reporting data on persistence were included. Proportions were transformed using the Freeman–Tukey double arcsine transformation and pooled using the DerSimonian–Laird random-effects approach. Results: In total, 12 observational studies (7/12 conference abstracts) were included in the review. All 12 studies either reported or provided data for persistence. The total numbers of patients meta-analyzed to estimate persistence at 3, 6, and 12 months were 71,969, 58,940, and 68,235, respectively. The estimated persistence for 3, 6, and 12 months of therapy was 83% (95% confidence interval [CI], 78–87; I²=99.3%), 62% (95% CI, 58–66; I²=98.1%), and 31% (95% CI, 22–40; I²=99.8%), respectively. Only two studies reported the risk of VTE recurrence based on nonpersistence – one at 3 months and the other at 12 months. Conclusion: Limited evidence showed that persistence was suboptimal, with an estimated 17% of patients being nonpersistent with anticoagulants in the crucial first 3 months. Persistence declined over 6 and 12 months
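The pooling procedure this abstract names (Freeman–Tukey double arcsine transformation of each study's persistence proportion, then DerSimonian–Laird random-effects pooling) can be sketched as follows. The study counts are hypothetical, and the back-transform shown is the simple sin² inverse rather than Miller's harmonic-mean version:

```python
import math

def ft_transform(events, n):
    """Freeman–Tukey double arcsine transform of a proportion events/n;
    the approximate variance of the transformed value is 1/(4n + 2)."""
    t = (math.asin(math.sqrt(events / (n + 1.0)))
         + math.asin(math.sqrt((events + 1.0) / (n + 1.0))))
    return t, 1.0 / (4.0 * n + 2.0)

def dersimonian_laird(y, v):
    """DerSimonian–Laird random-effects pooling of effect sizes y with variances v."""
    w = [1.0 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                  # between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    return pooled, tau2

# hypothetical per-study counts: (persistent patients, total patients)
studies = [(820, 1000), (450, 600), (300, 420)]
transformed = [ft_transform(e, n) for e, n in studies]
pooled_t, tau2 = dersimonian_laird([t for t, _ in transformed],
                                   [v for _, v in transformed])
p_hat = math.sin(pooled_t / 2.0) ** 2  # simple sin^2 back-transform
print(f"pooled persistence ~ {p_hat:.1%}, tau^2 = {tau2:.4f}")
```

With heterogeneous study proportions, the nonzero τ² widens the effective study weights toward equality, which is why random-effects pooling is less dominated by the largest study than a fixed-effect pool would be.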

  18. Heuristics and Cognitive Error in Medical Imaging.

    Science.gov (United States)

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  19. A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers

    Science.gov (United States)

    Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen

    2016-01-01

    We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.

  20. Practical application of the theory of errors in measurement

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the practical application of the theory of errors in measurement. The topics of the chapter include fixing on a maximum desired error, selecting a maximum error, the procedure for limiting the error, utilizing a standard procedure, setting specifications for a standard procedure, and selecting the number of measurements to be made
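One concrete instance of "selecting the number of measurements to be made" is choosing the smallest n for which the confidence half-width of the mean stays within a maximum desired error. A minimal sketch, assuming normally distributed measurement error with a known standard deviation (the numbers are illustrative):

```python
import math

def measurements_needed(sigma, max_error, z=1.96):
    """Smallest number of repeated measurements n such that the confidence
    half-width of the mean, z * sigma / sqrt(n), does not exceed max_error."""
    return math.ceil((z * sigma / max_error) ** 2)

# e.g., instrument SD of 0.8 units, desired maximum error of 0.5 units at ~95%
print(measurements_needed(0.8, 0.5))  # → 10
```

Because n scales with the square of z·σ/E, halving the tolerated error quadruples the number of measurements required.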

  1. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  2. A qualitative description of human error

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1992-11-01

Human error contributes significantly to the risk of reactor operation. Insight and analytical models are the main parts of human reliability analysis, which covers the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch. A human error event focuses on the erroneous action and its unfavorable result. Based on the time available for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making, and action; an erroneous human action may be generated at any stage of this process. Natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational, and environmental factors, are also listed.

  3. A qualitative description of human error

    Energy Technology Data Exchange (ETDEWEB)

    Zhaohuan, Li [Academia Sinica, Beijing, BJ (China). Inst. of Atomic Energy

    1992-11-01

Human error contributes significantly to the risk of reactor operation. Insight and analytical models are the main parts of human reliability analysis, which covers the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. For an operating reactor, human error is defined as a task-human-machine mismatch. A human error event focuses on the erroneous action and its unfavorable result. Based on the time available for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making, and action; an erroneous human action may be generated at any stage of this process. Natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational, and environmental factors, are also listed.

  4. Writing errors by adults and by children

    NARCIS (Netherlands)

    Nes, van F.L.

    1984-01-01

Writing errors are defined as occasional deviations from a person's normal handwriting; thus they are different from spelling mistakes. The deviations are systematic in nature to a certain degree and can therefore be quantitatively classified in accordance with (1) type and (2) location in a word.

  5. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  6. Limited evidence for the effect of sodium fluoride on deterioration of hearing loss in patients with otosclerosis: a systematic review of the literature

    NARCIS (Netherlands)

    Hentschel, M.A.; Huizinga, P.; van der Velden, D.L.; Wegner, I.; Bittermann, A.J.N.; van der Heijden, G.J.M.; Grolman, W.

    2014-01-01

    OBJECTIVE: To determine the protective effect of sodium fluoride on the deterioration of hearing loss in adult patients with otosclerosis. DATA SOURCES: PubMed, Embase, the Cochrane Library, and CINAHL. STUDY SELECTION: A systematic literature search was conducted. Studies reporting original study

  7. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
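The contrast between TER and LER can be made concrete with a toy delta-rule simulation. This is an illustrative sketch, not the authors' actual model code: the TER learner updates every cue against the summed (compound) prediction, Rescorla–Wagner style, while the LER learner updates each cue against its own prediction only.

```python
def train_ter(trials, alpha=0.1):
    """Total error reduction (Rescorla–Wagner style): every present cue is
    updated by the discrepancy between the outcome and the summed prediction
    of ALL cues present on the trial."""
    w = {}
    for cues, outcome in trials:
        compound = sum(w.get(c, 0.0) for c in cues)
        delta = outcome - compound          # one shared error signal
        for c in cues:
            w[c] = w.get(c, 0.0) + alpha * delta
    return w

def train_ler(trials, alpha=0.1):
    """Local error reduction: each cue is updated against the discrepancy
    between the outcome and that cue's OWN prediction only."""
    w = {}
    for cues, outcome in trials:
        for c in cues:
            w[c] = w.get(c, 0.0) + alpha * (outcome - w.get(c, 0.0))
    return w

# 100 compound trials: cues A and B always paired with a reward of 1.0
trials = [(("A", "B"), 1.0)] * 100
wt = train_ter(trials)
wl = train_ler(trials)
print(wt)  # A and B share the prediction: each settles near 0.5
print(wl)  # each cue independently approaches 1.0
```

The diverging asymptotes (shared 0.5 vs independent 1.0 per cue) are exactly the kind of behavioral prediction on which TER and LER models can be discriminated against data.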

  8. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

Off-resonance effects can introduce significant systematic errors in R₂ measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, ¹⁵N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R₂ caused by noise. Good estimates of total R₂ uncertainty are critical in order to obtain accurate estimates of optimized chemical exchange parameters and their uncertainties derived from χ² minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in ¹⁵N R₂ values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ² minimization protocol, in which the Carver-Richards equation is used to fit the observed R₂ dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from ¹H R₂ measurements in which systematic errors are negligible. Although ¹H and ¹⁵N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τex, and the fractional population, pa) were constrained to globally fit all R₂ profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τex and pa as global parameters was not improved when these parameters were free to fit the R

  9. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results from the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  10. Neurochemical enhancement of conscious error awareness.

    Science.gov (United States)

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  11. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  12. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  13. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
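The ANOVA computation described in the note can be sketched for the balanced one-factor random-effects model, where the between-patient mean square estimates the systematic component and the within-patient mean square the random component. A minimal illustration with hypothetical setup-error data in mm (the note's confidence-interval formulas are not reproduced here):

```python
def variance_components(data):
    """ANOVA estimates for a balanced one-factor random-effects model.
    data: one list of setup errors (mm) per patient, all the same length.
    Returns (population mean, systematic SD, random SD)."""
    a = len(data)              # number of patients
    n = len(data[0])           # measurements (fractions) per patient
    grand = sum(sum(p) for p in data) / (a * n)
    means = [sum(p) / n for p in data]
    ms_between = n * sum((m - grand) ** 2 for m in means) / (a - 1)
    ms_within = sum((x - m) ** 2
                    for p, m in zip(data, means) for x in p) / (a * (n - 1))
    var_random = ms_within
    var_systematic = max(0.0, (ms_between - ms_within) / n)  # truncate at zero
    return grand, var_systematic ** 0.5, var_random ** 0.5

# hypothetical setup errors (mm): 3 patients x 3 fractions each
data = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [5.0, 6.0, 7.0]]
pop_mean, sigma_sys, sigma_rand = variance_components(data)
print(pop_mean, sigma_sys, sigma_rand)
```

Note the (MS_between − MS_within)/n correction: simply taking the SD of per-patient means, as the conventional method does, overestimates the systematic component because each patient mean still carries random error of variance σ²/n, and the overestimate is worst when n is small (hypofractionation).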

  14. Sequential Ensembles Tolerant to Synthetic Aperture Radar (SAR) Soil Moisture Retrieval Errors

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2016-04-01

Full Text Available Due to complicated and undefined systematic errors in satellite observation, data assimilation integrating model states with satellite observations is more complicated than field-measurement-based data assimilation at a local scale. In the case of Synthetic Aperture Radar (SAR) soil moisture, the systematic errors arising from uncertainties in roughness conditions are significant and unavoidable, but current satellite bias-correction methods do not resolve these problems very well. Thus, apart from the bias-correction process for satellite observation, it is important to assess the inherent capability of satellite data assimilation under such sub-optimal but more realistic observational error conditions. To this end, the time-evolving sequential ensembles of the Ensemble Kalman Filter (EnKF) are compared with the stationary ensemble of the Ensemble Optimal Interpolation (EnOI) scheme, which does not evolve the ensembles over time. As the sensitivity analysis demonstrated that surface roughness affects the SAR retrievals more strongly than measurement errors do, one aim of this study is to monitor how data assimilation alters the effects of roughness on SAR soil moisture retrievals. In the results, both data assimilation schemes provided intermediate values between SAR overestimation and model underestimation. However, under the same SAR observational error conditions, the sequential ensembles approached a calibrated model, showing the lowest Root Mean Square Error (RMSE), while the stationary ensemble converged towards the SAR observations, exhibiting the highest RMSE. Compared to stationary ensembles, sequential ensembles have a better tolerance to SAR retrieval errors. This inherent property of the EnKF suggests an operational merit as a satellite data assimilation system, given the limitations of currently available bias-correction methods.
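The difference between the two schemes can be sketched for a scalar state: the EnKF recomputes its gain from the current, time-evolving ensemble spread at every cycle, whereas EnOI applies a gain built from a fixed background variance to a single state. This is a toy illustration with a hypothetical linear "drying" model, not the study's land-surface model:

```python
import random

def enkf_step(ensemble, obs, obs_var, model):
    """One EnKF cycle for a scalar state: propagate every member, then update
    with a Kalman gain computed from the CURRENT (time-evolving) ensemble
    spread, assimilating a perturbed observation per member."""
    forecast = [model(x) for x in ensemble]
    mean = sum(forecast) / len(forecast)
    var = sum((x - mean) ** 2 for x in forecast) / (len(forecast) - 1)
    gain = var / (var + obs_var)
    return [x + gain * (obs + random.gauss(0.0, obs_var ** 0.5) - x)
            for x in forecast]

def enoi_step(state, obs, obs_var, static_var, model):
    """One EnOI cycle: a single state updated with a gain built from a FIXED
    (stationary) background variance, so the spread never evolves."""
    x = model(state)
    gain = static_var / (static_var + obs_var)
    return x + gain * (obs - x)

random.seed(0)
drying = lambda sm: 0.95 * sm + 0.01  # toy soil-moisture propagation model
ens = enkf_step([0.20, 0.25, 0.30, 0.35], obs=0.40, obs_var=0.001, model=drying)
x = enoi_step(0.25, obs=0.40, obs_var=0.001, static_var=0.004, model=drying)
print(sum(ens) / len(ens), x)  # both pulled toward the observation
```

Because the EnKF's analysis shrinks the ensemble spread, its gain decreases over repeated cycles and the filter weights a biased observation less and less, which is consistent with the tolerance to retrieval errors reported above; EnOI's fixed gain keeps pulling the state toward the observations.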

  15. The Acquisition of Subject-Verb Agreement in Written French: From Novices to Experts' Errors.

    Science.gov (United States)

    Fayol, Michel; Largy, Pierre; Hupet, Michel

    1999-01-01

    Aims at demonstrating the gradual automatization of subject-verb agreement operation in young writers by examining developmental changes in the occurrence of agreement errors. Finds that subjects' performance moved from systematic errors to attraction errors through an intermediate phase. Concludes that attraction errors are a byproduct of the…

  16. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: when the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so it is often used as a substitute with little scientific justification. The existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error at each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found not only that the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also that the effects of these characteristics on the overall expected error differ. Most notably, under SQ error, bias, variance, and noise all increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
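The squared-error decomposition the abstract builds on, E[(y − f̂)²] = bias² + variance + noise, can be checked numerically with a Monte Carlo sketch on a toy problem (all numbers are hypothetical, chosen only to make the three components visible):

```python
import random

def mc_bvd(n=200_000, seed=42):
    """Monte Carlo check of E[(y - fhat)^2] = bias^2 + variance + noise.
    Toy setup: true signal 2.0, observation noise N(0, 0.5^2), and an
    independent model prediction N(2.3, 0.4^2), i.e. bias = 0.3."""
    rng = random.Random(seed)
    true, noise_sd = 2.0, 0.5
    errs, preds = [], []
    for _ in range(n):
        y = true + rng.gauss(0.0, noise_sd)  # noisy observation
        fhat = rng.gauss(2.3, 0.4)           # model prediction
        errs.append((y - fhat) ** 2)
        preds.append(fhat)
    lhs = sum(errs) / n                      # empirical expected SQ error
    mean_pred = sum(preds) / n
    bias2 = (mean_pred - true) ** 2          # systematic error component
    variance = sum((p - mean_pred) ** 2 for p in preds) / n
    rhs = bias2 + variance + noise_sd ** 2   # decomposition
    return lhs, rhs

lhs, rhs = mc_bvd()
print(lhs, rhs)  # the two sides agree up to Monte Carlo error
```

All three right-hand terms enter with a plus sign, which is the additive property of the SQ-error decomposition that, per the abstract, the ABS-error decomposition does not share.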

  17. A review of setup error in supine breast radiotherapy using cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au [South Western Clinical School, University of New South Wales, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, New South Wales (Australia); Ingham Institute of Applied Medical Research, Sydney, New South Wales (Australia); Holloway, Lois [South Western Clinical School, University of New South Wales, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, New South Wales (Australia); Ingham Institute of Applied Medical Research, Sydney, New South Wales (Australia); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales (Australia); Institute of Medical Physics, School of Physics, University of Sydney, Sydney, New South Wales (Australia); Delaney, Geoff P. [South Western Clinical School, University of New South Wales, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, New South Wales (Australia); Ingham Institute of Applied Medical Research, Sydney, New South Wales (Australia)

    2016-10-01

    Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal image, as CBCT remains unproven to be of wide benefit in breast RT.
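For orientation, the "systematic" and "random" setup errors such studies report are usually defined at the population level: the systematic error Σ is the spread of per-patient mean displacements, and the random error σ is the root mean square of the per-patient day-to-day spreads. The sketch below illustrates this with simulated couch shifts; all numbers are invented, and van Herk's 2.5Σ + 0.7σ margin recipe is quoted as a commonly used convention, not as what any reviewed study applied.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily setup displacements (mm), 20 patients x 15 fractions,
# hypothetical values standing in for CBCT-measured couch corrections.
true_sys = rng.normal(0.0, 2.0, size=20)           # per-patient offset
shifts = true_sys[:, None] + rng.normal(0.0, 1.5, size=(20, 15))

patient_means = shifts.mean(axis=1)
patient_sds = shifts.std(axis=1, ddof=1)

M = patient_means.mean()                    # group mean (overall bias)
Sigma = patient_means.std(ddof=1)           # population systematic error
sigma = np.sqrt((patient_sds ** 2).mean())  # population random error (RMS)

# van Herk's widely quoted CTV-to-PTV margin recipe.
margin = 2.5 * Sigma + 0.7 * sigma
```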

  19. DESIGN OF A COMPUTER-AIDED SYSTEM FOR ANALYZING HUMAN ERROR IN THE INDONESIAN RAILWAYS

    Directory of Open Access Journals (Sweden)

    Wiwik Budiawan

    2013-06-01

    the occurrence of a train crash in Indonesia. However, it is not clear how this analysis technique is carried out. Studies of human error by the National Transportation Safety Committee (NTSC) are still relatively limited and are not equipped with a systematic method. Several analysis methods have been developed to date, but few have been adapted for railway transportation. The Human Factors Analysis and Classification System (HFACS) is a human error analysis method that was developed and adapted to the Indonesian railway system. To improve the reliability of human error analysis, HFACS was then implemented as a web-based application that can be accessed on a computer or smartphone. The results could be used by the NTSC as a railway accident analysis method, particularly for accidents associated with human error. Keywords: human error, HFACS, CAS, railways

  20. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  1. Errors in Viking Lander Atmospheric Profiles Discovered Using MOLA Topography

    Science.gov (United States)

    Withers, Paul; Lorenz, R. D.; Neumann, G. A.

    2002-01-01

    Each Viking lander measured a topographic profile during entry. Comparing to MOLA (Mars Orbiter Laser Altimeter), we find a vertical error of 1-2 km in the Viking trajectory. This introduces a systematic error of 10-20% in the Viking densities and pressures at a given altitude. Additional information is contained in the original extended abstract.
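The quoted 10-20% scale of the density and pressure error can be sanity-checked from hydrostatic scaling: shifting an exponential atmosphere vertically by Δz changes the density read at a fixed altitude by a factor of exp(Δz/H) − 1. The snippet assumes a Mars scale height of roughly 10.8 km (an assumed round value, not taken from the abstract).

```python
import math

H = 10.8  # assumed Mars atmospheric scale height in km (not from the abstract)

density_error = {dz: math.exp(dz / H) - 1.0 for dz in (1.0, 2.0)}
for dz, frac in density_error.items():
    print(f"dz = {dz:.0f} km -> fractional density error ~ {frac:.0%}")
```

A 1-2 km trajectory error thus maps directly onto the 10-20% systematic density error the authors report.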

  2. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  3. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    Science.gov (United States)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  4. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  5. Force Reproduction Error Depends on Force Level, whereas the Position Reproduction Error Does Not

    NARCIS (Netherlands)

    Onneweer, B.; Mugge, W.; Schouten, Alfred Christiaan

    2016-01-01

    When reproducing a previously perceived force or position humans make systematic errors. This study determined the effect of force level on force and position reproduction, when both target and reproduction force are self-generated with the same hand. Subjects performed force reproduction tasks at

  6. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  7. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between: errors and violations; and active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggest that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  8. Help prevent hospital errors

    Science.gov (United States)

    MedlinePlus patient instructions (//medlineplus.gov/ency/patientinstructions/000618.htm) on helping to prevent errors in the hospital, including keeping yourself safe if you are having surgery.

  9. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  10. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  11. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  12. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills for her customers. It appeared that in the year 2000 many small, but also big errors were discovered in the bills of 42 businesses

  13. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to…

  14. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  15. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  16. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    Science.gov (United States)

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.

  17. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  18. Error exponents for entanglement concentration

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Koashi, Masato; Matsumoto, Keiji; Morikoshi, Fumiaki; Winter, Andreas

    2003-01-01

    Consider entanglement concentration schemes that convert n identical copies of a pure state into a maximally entangled state of a desired size with success probability being close to one in the asymptotic limit. We give the distillable entanglement, the number of Bell pairs distilled per copy, as a function of an error exponent, which represents the rate of decrease in failure probability as n tends to infinity. The formula fills the gap between the least upper bound of distillable entanglement in probabilistic concentration, which is the well-known entropy of entanglement, and the maximum attained in deterministic concentration. The method of types in information theory enables the detailed analysis of the distillable entanglement in terms of the error rate. In addition to the probabilistic argument, we consider another type of entanglement concentration scheme, where the initial state is deterministically transformed into a (possibly mixed) final state whose fidelity to a maximally entangled state of a desired size converges to one in the asymptotic limit. We show that the same formula as in the probabilistic argument is valid for the argument on fidelity by replacing the success probability with the fidelity. Furthermore, we also discuss entanglement yield when optimal success probability or optimal fidelity converges to zero in the asymptotic limit (strong converse), and give the explicit formulae for those cases
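For reference, the "entropy of entanglement" that bounds probabilistic concentration is the standard quantity below (standard definition, not restated in the abstract):

```latex
E(|\psi\rangle_{AB}) \;=\; S(\rho_A) \;=\; -\operatorname{Tr}\!\left(\rho_A \log_2 \rho_A\right),
\qquad \rho_A = \operatorname{Tr}_B\, |\psi\rangle\langle\psi| .
```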

  19. Improving Type Error Messages in OCaml

    Directory of Open Access Journals (Sweden)

    Arthur Charguéraud

    2015-12-01

    Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise and systematically useful to the programmer, but also to handle a full-blown programming language and to cope with large-sized programs efficiently. In this work, we present a modification to the traditional ML type inference algorithm implemented in OCaml that, by significantly reducing the left-to-right bias, allows us to report error messages that are more helpful to the programmer. Our algorithm remains fully predictable and continues to produce fairly concise error messages that always help make some progress towards fixing the code. We implemented our approach as a patch to the OCaml compiler in just a few hundred lines of code. We believe that this patch should benefit not just beginners, but also experienced programmers developing large-scale OCaml programs.

  20. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  1. Inference on rare errors using asymptotic expansions and bootstrap calibration

    NARCIS (Netherlands)

    R. Helmers (Roelof)

    1998-01-01

    The number of items in error in an audit population is usually quite small, whereas the error distribution is typically highly skewed to the right. For applications in statistical auditing, where line item sampling is appropriate, a new upper confidence limit for the total error amount…
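The setting can be sketched concretely: a zero-inflated, right-skewed error population and a bootstrap percentile upper confidence limit for the total error amount. This is a generic illustration with invented numbers; the paper's actual contribution, the asymptotic-expansion correction that calibrates such a limit, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical audited line items: most carry zero error ("rare errors"),
# a few carry right-skewed positive error amounts.
n = 400
errors = np.where(rng.random(n) < 0.03, rng.lognormal(4.0, 1.0, n), 0.0)

def upper_limit(sample, level=0.95, n_boot=2000):
    """Bootstrap percentile upper confidence limit for the mean error."""
    boots = np.array([
        rng.choice(sample, size=sample.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(boots, level)

population_total = 10_000  # hypothetical number of items in the population
limit_total = population_total * upper_limit(errors)
```

With so few nonzero items, the plain percentile limit tends to under-cover, which is exactly the gap that bootstrap calibration and asymptotic expansions aim to close.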

  2. Brief report on a systematic review of youth violence prevention through media campaigns: Does the limited yield of strong evidence imply methodological challenges or absence of effect?

    Science.gov (United States)

    Cassidy, Tali; Bowman, Brett; McGrath, Chloe; Matzopoulos, Richard

    2016-10-01

    We present a brief report on a systematic review which identified, assessed and synthesized the existing evidence of the effectiveness of media campaigns in reducing youth violence. Search strategies made use of terms for youth, violence and a range of terms relating to the intervention. An array of academic databases and websites were searched. Although media campaigns to reduce violence are widespread, only six studies met the inclusion criteria. There is little strong evidence to support a direct link between media campaigns and a reduction in youth violence. Several studies measure proxies for violence such as empathy or opinions related to violence, but the link between these measures and violence perpetration is unclear. Nonetheless, some evidence suggests that a targeted and context-specific campaign, especially when combined with other measures, can reduce violence. However, such campaigns are less cost-effective to replicate over large populations than generalised campaigns. It is unclear whether the paucity of evidence represents a null effect or methodological challenges with evaluating media campaigns. Future studies need to be carefully planned to accommodate for methodological difficulties as well as to identify the specific elements of campaigns that work, especially in lower and middle income countries. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  3. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    Science.gov (United States)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics due 1) to evolution of the official algorithms used to process the data, and 2) differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
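The definition used here, sampling error relative to a hypothetical permanently stationed satellite, can be mimicked in a toy experiment: build a full hourly rain series, treat its monthly mean as truth, and compare against means computed from sparse overpasses. All statistics below (rain fraction, intensities, revisit period) are invented and are not TRMM values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy rain-rate series: intermittent rain over a 30-day month, one value
# per hour (720 samples); purely illustrative statistics.
hours = 30 * 24
raining = rng.random(hours) < 0.1
rain = np.where(raining, rng.exponential(2.0, hours), 0.0)

true_monthly_mean = rain.mean()   # the permanently stationed satellite

# Satellite overpasses every 12 hours: 60 visits per month, with 12
# possible phase offsets for the overpass schedule.
estimates = np.array([rain[k::12].mean() for k in range(12)])

# RMS deviation of the sparse estimates from the full-record mean serves
# as the sampling-error estimate for this revisit schedule.
sampling_rms = np.sqrt(((estimates - true_monthly_mean) ** 2).mean())
relative_error = sampling_rms / true_monthly_mean
```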

  4. Clinical efficacy and safety of limited internal fixation combined with external fixation for Pilon fracture: A systematic review and meta-analysis

    OpenAIRE

    Zhang, Shaobo; Zhang, Yibao; Wang, Shenghong; Zhang, Hua; Liu, Peng; Zhang, Wei; Ma, Jing-Lin; Wang, Jing

    2017-01-01

    Purpose: To compare the clinical efficacy and complications of limited internal fixation combined with external fixation (LIFEF) and open reduction and internal fixation (ORIF) in the treatment of Pilon fracture. Methods: We searched databases including Pubmed, Embase, Web of science, Cochrane Library and China Biology Medicine disc for the studies comparing clinical efficacy and complications of LIFEF and ORIF in the treatment of Pilon fracture. The clinical efficacy was evaluated by the ...

  5. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross Interleaved - Reed - Solomon - Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
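
    The error-burst and good-data-gap statistics named in the first objective reduce to run-length statistics over a stream of per-byte error flags. A minimal sketch (the stream and the 0/1 flag convention are assumptions for illustration, not the project's actual data format):

```python
from itertools import groupby

def burst_gap_stats(error_flags):
    """Run-length statistics for a read-channel error stream.

    error_flags: iterable of 0/1, where 1 marks a byte read in error.
    Returns (burst_lengths, gap_lengths): lengths of consecutive error
    bytes and of consecutive good bytes, respectively.
    """
    bursts, gaps = [], []
    for flag, run in groupby(error_flags):
        (bursts if flag else gaps).append(sum(1 for _ in run))
    return bursts, gaps

stream = [0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1]
bursts, gaps = burst_gap_stats(stream)
print(bursts)  # [3, 1, 2]
print(gaps)    # [2, 1, 3]
```

    Histograms of these run lengths are exactly the kind of input a model of CIRC decoder output error rates would consume.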

  6. Error correction and degeneracy in surface codes suffering loss

    International Nuclear Information System (INIS)

    Stace, Thomas M.; Barrett, Sean D.

    2010-01-01

    Many proposals for quantum information processing are subject to detectable loss errors. In this paper, we give a detailed account of recent results in which we showed that topological quantum memories can simultaneously tolerate both loss errors and computational errors, with a graceful tradeoff between the threshold for each. We further discuss a number of subtleties that arise when implementing error correction on topological memories. We particularly focus on the role played by degeneracy in the matching algorithms and present a systematic study of its effects on thresholds. We also discuss some of the implications of degeneracy for estimating phase transition temperatures in the random bond Ising model.

  7. Isokinetic strength assessment offers limited predictive validity for detecting risk of future hamstring strain in sport: a systematic review and meta-analysis.

    Science.gov (United States)

    Green, Brady; Bourne, Matthew N; Pizzari, Tania

    2018-03-01

    To examine the value of isokinetic strength assessment for predicting risk of hamstring strain injury, and to direct future research into hamstring strain injuries. Systematic review. Database searches for Medline, CINAHL, Embase, AMED, AUSPORT, SPORTDiscus, PEDro and Cochrane Library from inception to April 2017. Manual reference checks, ahead-of-press and citation tracking. Prospective studies evaluating isokinetic hamstrings, quadriceps and hip extensor strength testing as a risk factor for occurrence of hamstring muscle strain. Independent search result screening. Risk of bias assessment by independent reviewers using Quality in Prognosis Studies tool. Best evidence synthesis and meta-analyses of standardised mean difference (SMD). Twelve studies were included, capturing 508 hamstring strain injuries in 2912 athletes. Isokinetic knee flexor, knee extensor and hip extensor outputs were examined at angular velocities ranging 30-300°/s, concentric or eccentric, and relative (Nm/kg) or absolute (Nm) measures. Strength ratios ranged between 30°/s and 300°/s. Meta-analyses revealed a small, significant predictive effect for absolute (SMD=-0.16, P=0.04, 95% CI -0.31 to -0.01) and relative (SMD=-0.17, P=0.03, 95% CI -0.33 to -0.014) eccentric knee flexor strength (60°/s). No other testing speed or strength ratio showed statistical association. Best evidence synthesis found over half of all variables had moderate or strong evidence for no association with future hamstring injury. Despite an isolated finding for eccentric knee flexor strength at slow speeds, the role and application of isokinetic assessment for predicting hamstring strain risk should be reconsidered, particularly given costs and specialised training required. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
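
    The SMD figures quoted above are standardised mean differences. A minimal sketch of Cohen's d with a large-sample 95% CI (the strength values below are hypothetical, not data from the review):

```python
import math

def smd(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d (standardised mean difference) with a large-sample 95% CI."""
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # usual large-sample variance approximation for d
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    se = math.sqrt(var)
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical eccentric knee flexor strength (Nm): injured vs uninjured.
d, (lo, hi) = smd(110.0, 15.0, 50, 118.0, 16.0, 450)
print(round(d, 2), round(lo, 2), round(hi, 2))
```

    A negative d, as in the review's pooled estimates, indicates lower strength in the subsequently injured group.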

  8. Manual therapy for the management of pain and limited range of motion in subjects with signs and symptoms of temporomandibular disorder: a systematic review of randomised controlled trials.

    Science.gov (United States)

    Calixtre, L B; Moreira, R F C; Franchini, G H; Alburquerque-Sendín, F; Oliveira, A B

    2015-11-01

    There is a lack of knowledge about the effectiveness of manual therapy (MT) on subjects with temporomandibular disorders (TMD). The aim of this systematic review is to synthesise evidence regarding the isolated effect of MT in improving maximum mouth opening (MMO) and pain in subjects with signs and symptoms of TMD. MEDLINE(®), Cochrane, Web of Science, SciELO and EMBASE(™) electronic databases were consulted, searching for randomised controlled trials applying MT for TMD compared to other interventions, no intervention or placebo. Two authors independently extracted data, the PEDro scale was used to assess risk of bias, and GRADE (Grading of Recommendations Assessment, Development and Evaluation) was applied to synthesise the overall quality of the body of evidence. Treatment effect size was calculated for pain, MMO and pressure pain threshold (PPT). Eight trials were included, seven of high methodological quality. Myofascial release and massage techniques applied on the masticatory muscles are more effective than control (low to moderate evidence) but as effective as botulinum toxin injections (moderate evidence). Upper cervical spine thrust manipulation or mobilisation techniques are more effective than control (low to high evidence), while thoracic manipulations are not. There is moderate-to-high evidence that MT protocols are effective. Methodological heterogeneity across trial protocols frequently lowered the quality of the evidence. In conclusion, there is widely varying evidence that MT improves pain, MMO and PPT in subjects with TMD signs and symptoms, depending on the technique. Further studies should consider using standardised evaluations and better study designs to strengthen clinical relevance. © 2015 John Wiley & Sons Ltd.

  9. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients, and mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  10. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defence of libertarianism against two accusations that it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks in physicalist indeterminism the basis for the possibility of human freedom cannot necessarily be accused of committing them.

  11. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defence of libertarianism against two accusations that it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks in physicalist indeterminism the basis for the possibili...

  12. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  13. On the Limitations of Variational Bias Correction

    Science.gov (United States)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
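
    The limitation described here can be made concrete with a toy example: when the background itself is biased, the mean observation-minus-background innovation, which is what a static variational bias correction effectively estimates, recovers the difference of the two biases rather than the observation bias alone (all numbers are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5000
obs_bias = 0.8    # e.g. an uncorrected antenna-emissivity effect (assumed)
bkg_bias = -0.3   # systematic model/background error (assumed)
innovations = obs_bias - bkg_bias + rng.normal(0.0, 1.0, n)  # O minus B

# A static bias-correction scheme effectively estimates the mean innovation...
estimated_bias = innovations.mean()

# ...which lumps observation and background biases together:
# estimated_bias is close to obs_bias - bkg_bias, not to obs_bias.
print(round(float(estimated_bias), 2))
```

    This is the aliasing the abstract warns about: any bias the background and forward operator contribute is silently attributed to the observations.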

  14. A systematic review of air pollution as a risk factor for cardiovascular disease in South Asia: limited evidence from India and Pakistan.

    Science.gov (United States)

    Yamamoto, S S; Phalkey, R; Malik, A A

    2014-03-01

    Cardiovascular diseases (CVD) are major contributors to mortality and morbidity in South Asia. Chronic exposure to air pollution is an important risk factor for cardiovascular diseases, although the majority of studies to date have been conducted in developed countries. Both indoor and outdoor air pollution are growing problems in developing countries in South Asia, yet the impact on rising rates of CVD in these regions has largely been ignored. We aimed to assess the evidence available regarding air pollution effects on CVD and CVD risk factors in lower income countries in South Asia. A literature search was conducted in PubMed and Web of Science. Our inclusion criteria included peer-reviewed, original, empirical articles published in English between the years 1990 and 2012, conducted in the World Bank South Asia region (Afghanistan, Bangladesh, Bhutan, India, Maldives, Nepal, Pakistan and Sri Lanka). This resulted in 30 articles. Nine articles met our inclusion criteria and were assessed for this systematic review. Most of the studies were cross-sectional and examined measured particulate matter effects on CVD outcomes and indicators. We observed a geographical bias, as nearly all of the studies were from India. Hypertension and CVD deaths were positively associated with higher particulate matter levels. Biomarkers of oxidative stress such as increased levels of P-selectin-expressing platelets, depleted superoxide dismutase and reactive oxygen species generation, as well as elevated levels of inflammatory-related C-reactive protein, interleukin-6 and interleukin-8, were also positively associated with biomass use or elevated particulate matter levels. An important outcome of this investigation was the evidence suggesting important air pollution effects regarding CVD risk in South Asia. However, too few studies have been conducted. There is an urgent need for longer term investigations using robust measures of air pollution with different population groups that include a wider

  15. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.
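
    The remark about table size can be illustrated with the smallest classical example: syndrome decoding of the Hamming(7,4) code, where the lookup "table" is just the 7 columns of the parity-check matrix. For a Reed-Solomon code like the RS(255,223) mentioned above, the analogous exhaustive table would be astronomically large, which is why practical decoders use algebraic methods instead. This sketch is illustrative, not the article's own example:

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code; column j is the binary
# representation of j+1, so the syndrome directly names the error position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def decode(received):
    """Single-error correction by syndrome lookup."""
    syndrome = H @ received % 2
    pos = int(syndrome[0]) * 4 + int(syndrome[1]) * 2 + int(syndrome[2])
    corrected = received.copy()
    if pos:                       # nonzero syndrome -> flip the flagged bit
        corrected[pos - 1] ^= 1
    return corrected

codeword = np.array([1, 0, 1, 0, 1, 0, 1])   # a valid Hamming(7,4) codeword
assert np.all(H @ codeword % 2 == 0)
noisy = codeword.copy()
noisy[4] ^= 1                                 # inject a single bit error
print(decode(noisy).tolist())                 # recovers the codeword
```

    Here there are only 2³ = 8 syndromes; for RS(255,223) over GF(256) with 32 check bytes, an exhaustive syndrome table would need 256³² entries, hence the "generally too large" in the snippet above.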

  16. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  17. Heat stress and fetal risk. Environmental limits for exercise and passive heat stress during pregnancy: a systematic review with best evidence synthesis.

    Science.gov (United States)

    Ravanelli, Nicholas; Casasola, William; English, Timothy; Edwards, Kate M; Jay, Ollie

    2018-03-01

    Pregnant women are advised to avoid heat stress (eg, excessive exercise and/or heat exposure) due to the risk of teratogenicity associated with maternal hyperthermia, defined as a core temperature (Tcore) ≥39.0°C. However, guidelines are ambiguous in terms of critical combinations of climate and activity to avoid and may therefore unnecessarily discourage physical activity during pregnancy. Thus, the primary aim was to assess Tcore elevations with different characteristics defining exercise and passive heat stress (intensity, mode, ambient conditions, duration) during pregnancy relative to the critical maternal Tcore of ≥39.0°C. Systematic review with best evidence synthesis. EMBASE, MEDLINE, SCOPUS, CINAHL and Web of Science were searched from inception to 12 July 2017. Studies reporting the Tcore response of pregnant women, at any period of gestation, to exercise or passive heat stress, were included. 12 studies satisfied our inclusion criteria (n=347). No woman exceeded a Tcore of 39.0°C. The highest Tcore was 38.9°C, reported during land-based exercise. The highest mean end-trial Tcore was 38.3°C (95% CI 37.7°C to 38.9°C) for land-based exercise, 37.5°C (95% CI 37.3°C to 37.7°C) for water immersion exercise, 36.9°C (95% CI 36.8°C to 37.0°C) for hot water bathing and 37.6°C (95% CI 37.5°C to 37.7°C) for sauna exposure. The highest individual core temperature reported was 38.9°C. Immediately after exercise (either land based or water immersion), the highest mean core temperature was 38.3°C; 0.7°C below the proposed teratogenic threshold. Pregnant women can safely engage in: (1) exercise for up to 35 min at 80%-90% of their maximum heart rate in 25°C and 45% relative humidity (RH); (2) water immersion (≤33.4°C) exercise for up to 45 min; and (3) sitting in hot baths (40°C) or hot/dry saunas (70°C; 15% RH) for up to 20 min, irrespective of pregnancy stage, without reaching a core temperature exceeding the teratogenic threshold.

  18. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error [image omitted] attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  19. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition and taxonomy of team errors. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, poor resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  20. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
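
    The idea of stochastic parameter estimation, augmenting the state with uncertain model parameters and filtering both together, can be sketched in a deliberately simple linear setting (unlike the genuinely nonlinear SPEKF algorithms, and with all parameter values assumed): a scalar signal has an unknown constant forcing bias, and an augmented Kalman filter estimates the signal and the bias jointly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Truth: damped scalar signal with an unknown constant forcing bias b.
a, b_true, q, r = 0.9, 0.5, 0.05, 0.2
n = 2000
x = 0.0
obs = []
for _ in range(n):
    x = a * x + b_true + rng.normal(0, np.sqrt(q))
    obs.append(x + rng.normal(0, np.sqrt(r)))

# Augmented-state Kalman filter: estimate z = [x, b] jointly.
F = np.array([[a, 1.0], [0.0, 1.0]])   # b evolves as a near-constant
Q = np.diag([q, 1e-6])                 # tiny noise keeps b adaptable
Hm = np.array([[1.0, 0.0]])            # only x is observed
z = np.zeros(2)
P = np.eye(2)
for y in obs:
    z = F @ z                          # forecast
    P = F @ P @ F.T + Q
    S = Hm @ P @ Hm.T + r              # innovation variance
    K = (P @ Hm.T) / S                 # Kalman gain
    z = z + (K * (y - Hm @ z)).ravel() # analysis update
    P = (np.eye(2) - K @ Hm) @ P

print(round(float(z[1]), 2))           # estimated bias, close to b_true = 0.5
```

    SPEKF plays the same game for turbulent signals with multiplicative as well as additive bias parameters, using exact mean and covariance propagation formulas instead of this linear update.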

  1. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method
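
    The two-step procedure, derive a correction that removes the systematic component and then model the remaining scatter as a random uncertainty, can be sketched as follows (the error distribution is invented for illustration; only the beam count matches the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-beam dose-calculation errors (%): a systematic offset
# plus beam-to-beam scatter, standing in for the 593-beam dataset.
errors = 1.2 + rng.normal(0.0, 0.8, 593)

# Step 1: derive a correction that removes the systematic component.
correction = errors.mean()
residual = errors - correction

# Step 2: model the remaining residual as a random uncertainty.
predicted_uncertainty = residual.std(ddof=1)

print(round(float(correction), 2), round(float(predicted_uncertainty), 2))
```

    As in the study, the corrected calculations show no systematic error by construction, and the residual spread is what gets reported as the model uncertainty for individual beams.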

  2. Limitations of studies on school-based nutrition education interventions for obesity in China: a systematic review and meta-analysis.

    Science.gov (United States)

    Kong, Kaimeng; Liu, Jie; Tao, Yexuan

    2016-01-01

    School-based nutrition education has been widely implemented in recent years to fight the increasing prevalence of childhood obesity in China. A comprehensive literature search was performed using six databases to identify studies of school-based nutrition education interventions in China. The methodological quality and the risk of bias of selected literature were evaluated. Stratified analysis was performed to identify whether different methodologies influenced the estimated effect of the intervention. Seventeen articles were included in the analysis. Several of the included studies had inadequate intervention duration, inappropriate randomization methods, selection bias, unbalanced baseline characteristics between control and intervention groups, and absent sample size calculation. Overall, the studies showed no significant impact of nutrition education on obesity (OR=0.76; 95% CI=0.55-1.05; p=0.09). This can be compared with an OR of 0.68 for interventions aimed at preventing malnutrition and an OR of 0.49 for interventions aimed at preventing iron-deficiency anemia. When studies with unbalanced baseline characteristics between groups and selection bias in the study subjects were excluded, the impact of nutrition education on obesity was significant (OR=0.73; 95% CI=0.55-0.98; p=0.003). An analysis stratified according to the duration of intervention revealed that the intervention was effective only when it lasted for more than 2 years (OR=0.49, 95% CI=0.42-0.58; p<0.001). In summary, studies of school-based nutrition education programs in China have some important limitations that might affect the estimated effectiveness of the intervention.
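
    The pooled odds ratios reported above come from standard meta-analytic pooling. A minimal fixed-effect (inverse-variance) sketch, with entirely hypothetical 2x2 counts rather than the review's data:

```python
import math

def pooled_or(studies):
    """Fixed-effect (inverse-variance) pooled odds ratio with a 95% CI.

    studies: list of 2x2 counts (a, b, c, d) =
    (obese, non-obese) in the intervention arm, then in the control arm.
    """
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d    # Woolf variance of log OR
        w = 1.0 / var
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se))

# Hypothetical obesity counts from three school-based trials.
or_, lo, hi = pooled_or([(40, 160, 50, 150), (30, 170, 45, 155), (25, 175, 35, 165)])
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

    A pooled OR below 1 with a CI excluding 1 is what the review's significant subgroup results (e.g. OR=0.49, 95% CI 0.42-0.58) express.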

  3. Clinical efficacy and safety of limited internal fixation combined with external fixation for Pilon fracture: A systematic review and meta-analysis.

    Science.gov (United States)

    Zhang, Shao-Bo; Zhang, Yi-Bao; Wang, Sheng-Hong; Zhang, Hua; Liu, Peng; Zhang, Wei; Ma, Jing-Lin; Wang, Jing

    2017-04-01

    To compare the clinical efficacy and complications of limited internal fixation combined with external fixation (LIFEF) and open reduction and internal fixation (ORIF) in the treatment of Pilon fracture. We searched databases including Pubmed, Embase, Web of science, Cochrane Library and China Biology Medicine disc for the studies comparing clinical efficacy and complications of LIFEF and ORIF in the treatment of Pilon fracture. The clinical efficacy was evaluated by the rate of nonunion, malunion/delayed union and the excellent/good rate assessed by Mazur ankle score. The complications including infections and arthritis symptoms after surgery were also investigated. Nine trials including 498 pilon fractures in 494 patients were identified. The meta-analysis found no significant differences in nonunion rate (RR = 1.60, 95% CI: 0.66 to 3.86, p = 0.30), and the excellent/good rate (RR = 0.95, 95% CI: 0.86 to 1.04, p = 0.28) between the LIFEF group and the ORIF group. For assessment of infections, there were significant differences in the rate of deep infection (RR = 2.18, 95% CI: 1.34 to 3.55, p = 0.002), and the rate of arthritis (RR = 1.26, 95% CI: 1.03 to 1.53, p = 0.02) between the LIFEF group and the ORIF group. LIFEF has a similar effect to ORIF in the treatment of pilon fractures; however, the LIFEF group has a significantly higher risk of complications than the ORIF group, so LIFEF is not recommended in the treatment of pilon fracture. Copyright © 2017 Daping Hospital and the Research Institute of Surgery of the Third Military Medical University. Production and hosting by Elsevier B.V. All rights reserved.
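
    The RR figures quoted above are risk ratios with log-normal confidence intervals. A minimal sketch (the counts below are hypothetical, not the meta-analysis data):

```python
import math

def risk_ratio(e1, n1, e2, n2):
    """Risk ratio with a 95% CI (log-normal approximation).

    e1/n1: events/total in one arm (e.g. LIFEF);
    e2/n2: events/total in the comparator arm (e.g. ORIF).
    """
    rr = (e1 / n1) / (e2 / n2)
    se = math.sqrt(1 / e1 - 1 / n1 + 1 / e2 - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical deep-infection counts in the two arms.
rr, lo, hi = risk_ratio(22, 250, 10, 248)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

    An RR above 1 with a CI excluding 1, as for deep infection above, indicates a significantly higher complication risk in the first arm.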

  4. Pelvic lymph node dissection during robot-assisted radical prostatectomy: efficacy, limitations, and complications-a systematic review of the literature.

    Science.gov (United States)

    Ploussard, Guillaume; Briganti, Alberto; de la Taille, Alexandre; Haese, Alexander; Heidenreich, Axel; Menon, Mani; Sulser, Tullio; Tewari, Ashutosh K; Eastham, James A

    2014-01-01

    Pelvic lymph node dissection (PLND) in prostate cancer is the most effective method for detecting lymph node metastases. However, a decline in the rate of PLND during radical prostatectomy (RP) has been noted. This is likely the result of prostate cancer stage migration in the prostate-specific antigen-screening era, and the introduction of minimally invasive approaches such as robot-assisted radical prostatectomy (RARP). To assess the efficacy, limitations, and complications of PLND during RARP. A review of the literature was performed using the Medline, Scopus, and Web of Science databases with no restriction of language from January 1990 to December 2012. The literature search used the following terms: prostate cancer, radical prostatectomy, robot-assisted, and lymph node dissection. The median value of nodal yield at PLND during RARP ranged from 3 to 24 nodes. As seen in open and laparoscopic RP series, the lymph node positivity rate increased with the extent of dissection during RARP. Overall, PLND-only related complications are rare. The most frequent complication after PLND is symptomatic pelvic lymphocele, with occurrence ranging from 0% to 8% of cases. The rate of PLND-associated grade 3-4 complications ranged from 0% to 5%. PLND is associated with increased operative time. Available data suggest equivalence of PLND between RARP and other surgical approaches in terms of nodal yield, node positivity, and intraoperative and postoperative complications. PLND during RARP can be performed effectively and safely. The overall number of nodes removed, the likelihood of node positivity, and the types and rates of complications of PLND are similar to pure laparoscopic and open retropubic procedures. Copyright © 2013 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  5. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  6. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, safest and cheapest ways to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has given new opportunities for remodelling the cornea. The laser energy can be delivered to the stromal surface, as in PRK, or deeper into the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  7. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  8. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...
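
    Tracking error volatility as discussed here is simply the standard deviation of active (portfolio-minus-benchmark) returns, usually annualised. A toy sketch with simulated monthly returns (all figures hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical monthly returns: a benchmark and an "active" portfolio
# that deviates from it only through small active bets.
benchmark = rng.normal(0.006, 0.04, 60)
portfolio = benchmark + rng.normal(0.001, 0.01, 60)

# Tracking error volatility: stdev of active returns, annualised.
active = portfolio - benchmark
tev = float(active.std(ddof=1) * np.sqrt(12))
print(round(tev, 4))
```

    A risk manager's TEV cap bounds this quantity; the paper's point is that a low TEV alone cannot tell an actively managed portfolio from a closet index tracker.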

  9. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  10. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

    Satellite Photometric Error Determination Tamara E. Payne, Philip J. Castro, Stephen A. Gregory Applied Optimization 714 East Monument Ave, Suite...advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly...filter systems will likely be supplanted by the Sloan based filter systems. The Johnson photometric system is a set of filters in the optical

  11. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  12. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.
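
    The data-hiding idea, embedding error-correction side information inside the video signal itself, can be illustrated with the simplest possible scheme: least-significant-bit embedding. This is only a toy stand-in; the codec described above hides its data in MPEG-2 transform data, not in raw sample LSBs.

```python
def embed(cover, bits):
    """Hide a bit string in the least-significant bits of cover samples
    (a toy stand-in for hiding error-correction info in video data)."""
    return [(c & ~1) | b for c, b in zip(cover, bits)] + cover[len(bits):]

def extract(stego, n):
    """Recover the first n hidden bits."""
    return [s & 1 for s in stego[:n]]

cover = [118, 120, 119, 121, 122, 117]   # hypothetical pixel values
bits = [1, 0, 1, 1]                      # side information to hide
stego = embed(cover, bits)
print(extract(stego, 4))                          # [1, 0, 1, 1]
print(max(abs(a - b) for a, b in zip(cover, stego)))  # distortion is at most 1
```

    The distortion bound is the point: the hidden channel is almost imperceptible, exactly the property the codec exploits in the error-free case.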

  13. Dose error analysis for a scanned proton beam delivery system

    International Nuclear Information System (INIS)

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-01-01

All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water-equivalent density with an 8 cm spread-out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian-shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
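The per-voxel rms computation described in the abstract can be sketched numerically. The following is a toy stand-in, not the paper's simulation: all beam delivery errors (spot position, energy, intensity) are lumped into a single Gaussian dose perturbation per voxel, and the voxel count and perturbation width are illustrative assumptions:

```python
import math
import random

def simulate_treatment(n_voxels, prescribed, sigma):
    # One simulated delivery: each voxel receives the prescribed dose
    # plus a random perturbation (stand-in for the combined effect of
    # spot position, energy and intensity fluctuations).
    return [prescribed + random.gauss(0.0, sigma) for _ in range(n_voxels)]

def rms_dose_error(n_voxels, prescribed, sigma, n_runs):
    # Deliver the same plan n_runs times and compute, per voxel, the rms
    # deviation of delivered dose from the mean delivered dose, expressed
    # as a fraction of the prescribed dose.
    runs = [simulate_treatment(n_voxels, prescribed, sigma) for _ in range(n_runs)]
    errors = []
    for v in range(n_voxels):
        doses = [run[v] for run in runs]
        mean = sum(doses) / n_runs
        rms = math.sqrt(sum((d - mean) ** 2 for d in doses) / n_runs)
        errors.append(rms / prescribed)
    return errors

random.seed(1)
errs = rms_dose_error(n_voxels=100, prescribed=2.0, sigma=0.02, n_runs=200)
print(max(errs))  # with a 0.02 Gy (1%) perturbation, well below the 2-3% limit
```

With a perturbation of 1% of the 2 Gy prescription, every voxel's rms error stays comfortably inside the 2–3% clinical bound quoted in the abstract.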

  14. Putting a face on medical errors: a patient perspective.

    Science.gov (United States)

    Kooienga, Sarah; Stewart, Valerie T

    2011-01-01

    Knowledge of the patient's perspective on medical error is limited. Research efforts have centered on how best to disclose error and how patients desire to have medical error disclosed. On the basis of a qualitative descriptive component of a mixed method study, a purposive sample of 30 community members told their stories of medical error. Their experiences focused on lack of communication, missed communication, or provider's poor interpersonal style of communication, greatly contrasting with the formal definition of error as failure to follow a set standard of care. For these participants, being a patient was more important than error or how an error is disclosed. The patient's understanding of error must be a key aspect of any quality improvement strategy. © 2010 National Association for Healthcare Quality.

  15. [Sources of error in the European Pharmacopoeia assay of halide salts of organic bases by titration with alkali].

    Science.gov (United States)

    Kószeginé, S H; Ráfliné, R Z; Paál, T; Török, I

    2000-01-01

A short overview has been given by the authors on the titrimetric assay methods of halide salts of organic bases in the pharmacopoeias of greatest importance. The alternative procedures introduced by the European Pharmacopoeia Commission some years ago to replace the non-aqueous titration with perchloric acid in the presence of mercuric acetate have also been presented and evaluated. The authors investigated the limits of applicability and the sources of systematic errors (bias) of the strongly preferred titration with sodium hydroxide in an alcoholic medium. To assess the bias due to the differences between the results calculated from the two inflexion points of the titration curves and the two real endpoints corresponding to the strong and weak acids, respectively, the mathematical analysis of the titration curve function was carried out. This bias, generally negligible when the pH change near the endpoint of the titration is more than 1 unit, is a function of the concentration, the apparent pK of the analyte and the ionic product of water (ethanol) in the alcohol-water mixtures. Using the validation data obtained for the method with the titration of ephedrine hydrochloride, the authors analysed the impact of carbon dioxide in the titration medium on the additive and proportional systematic errors of the method. The newly introduced standardisation procedure of the European Pharmacopoeia for the sodium hydroxide titrant to decrease the systematic errors caused by carbon dioxide has also been evaluated.

  16. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm made it possible to distinguish partially aware errors (i.e., errors that were noticed but misclassified) from fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...
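The analysis-of-precision idea can be sketched as follows. This is an illustrative reading, not the paper's procedure: per-step standard deviations are combined in quadrature to predict the result's standard deviation, and observed replicate variability is compared against the prediction with a chi-square statistic (the step values and critical value below are assumptions):

```python
import math

def predicted_sd(step_sds):
    # Independent steps: the predicted SD of the result is the
    # quadrature sum of the per-step standard deviations.
    return math.sqrt(sum(s * s for s in step_sds))

def in_statistical_control(replicates, sigma_pred, chi2_upper):
    # T = sum((x_i - mean)^2) / sigma_pred^2 follows a chi-square
    # distribution with n-1 degrees of freedom when the method is in
    # statistical control; chi2_upper is the critical value for n-1 d.o.f.
    n = len(replicates)
    mean = sum(replicates) / n
    t = sum((x - mean) ** 2 for x in replicates) / sigma_pred ** 2
    return t <= chi2_upper

# Hypothetical per-step SDs, e.g. weighing, dilution, counting:
sigma = predicted_sd([0.02, 0.01, 0.015])
print(round(sigma, 4))  # → 0.0269
# Five replicates scattered consistently with the prediction
# (chi-square 95% critical value for 4 d.o.f. is about 9.49):
ok = in_statistical_control([1.00, 1.02, 0.99, 1.01, 1.00], sigma, 9.49)
print(ok)
```

When the chi-square statistic exceeds the critical value, the replicate scatter is larger than the per-step budget predicts, signalling an unaccounted error source.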

  18. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP genotypes, can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: By numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R(2, and introduced the sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all rare haplotypes. The overall error rate was generally increasing with increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information, if a specific risk haplotype can be expected to be reconstructed with rather no or high misclassification and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.

  19. The effect of errors in charged particle beams

    International Nuclear Information System (INIS)

    Carey, D.C.

    1987-01-01

    Residual errors in a charged particle optical system determine how well the performance of the system conforms to the theory on which it is based. Mathematically possible optical modes can sometimes be eliminated as requiring precisions not attainable. Other plans may require introduction of means of correction for the occurrence of various errors. Error types include misalignments, magnet fabrication precision limitations, and magnet current regulation errors. A thorough analysis of a beam optical system requires computer simulation of all these effects. A unified scheme for the simulation of errors and their correction is discussed

  20. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean: 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean: 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  1. Systematic optimization of multiplex zymography protocol to detect active cathepsins K, L, S, and V in healthy and diseased tissue: compromise among limits of detection, reduced time, and resources.

    Science.gov (United States)

    Dumas, Jerald E; Platt, Manu O

    2013-07-01

Cysteine cathepsins are a family of proteases identified in cancer, atherosclerosis, osteoporosis, arthritis, and a number of other diseases. As this number continues to rise, so does the need for low cost, broad use quantitative assays that detect their activity and can be translated to the clinic in the hospital or in low resource settings. Multiplex cathepsin zymography is one such assay that detects subnanomolar levels of active cathepsins K, L, S, and V in cell or tissue preparations observed as clear bands of proteolytic activity after gelatin substrate SDS-PAGE with conditions optimal for cathepsin renaturing and activity. Densitometric analysis of the zymogram provides quantitative information from this low cost assay. After systematic modifications to optimize cathepsin zymography, we describe reduced electrophoresis time from 2 h to 10 min, incubation assay time from overnight to 4 h, and reduced minimal tissue protein necessary while maintaining sensitive detection limits; an evaluation of the pros and cons of each modification is also included. We further describe image acquisition by smartphone camera, export to Matlab, and densitometric analysis code to quantify and report cathepsin activity, adding portability and replacing large scale, darkbox imaging equipment that could be cost prohibitive in limited resource settings.
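The densitometric step can be illustrated with a minimal sketch. The abstract's analysis code is in Matlab; the following is a hypothetical Python analogue that integrates pixel intensity above a local background over a band's bounding box (toy image, hand-picked band region and background value, assuming the image is oriented so cleared bands read as bright pixels):

```python
def band_density(image, rows, cols, background):
    # Densitometry on a gelatin zymogram: cathepsin activity appears as a
    # cleared (bright) band, so the signal is pixel intensity above the
    # local background, integrated over the band's bounding box.
    total = 0.0
    for r in rows:
        for c in cols:
            total += max(image[r][c] - background, 0.0)
    return total

# Toy 5x5 "gel" with one cleared band in rows 1-2, columns 1-3
gel = [
    [10, 10, 10, 10, 10],
    [10, 80, 90, 85, 10],
    [10, 75, 95, 80, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
]
density = band_density(gel, rows=range(1, 3), cols=range(1, 4), background=10)
print(density)  # → 445.0
```

Comparing such integrated densities across lanes (against a standard of known activity) is what turns the zymogram into a quantitative readout.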

  2. Systematic optimization of multiplex zymography protocol to detect active cathepsins K, L, S, and V in healthy and diseased tissue: compromise between limits of detection, reduced time, and resources

    Science.gov (United States)

    Dumas, Jerald E.; Platt, Manu O.

    2013-01-01

Cysteine cathepsins are a family of proteases identified in cancer, atherosclerosis, osteoporosis, arthritis and a number of other diseases. As this number continues to rise, so does the need for low cost, broad use quantitative assays that detect their activity and can be translated to the clinic in the hospital or in low resource settings. Multiplex cathepsin zymography is one such assay that detects subnanomolar levels of active cathepsins K, L, S, and V in cell or tissue preparations observed as cleared bands of proteolytic activity after gelatin substrate SDS-PAGE with conditions optimal for cathepsin renaturing and activity. Densitometric analysis of the zymogram provides quantitative information from this low cost assay. After systematic modifications to optimize cathepsin zymography, we describe reduced electrophoresis time from 2 hours to 10 minutes, incubation assay time from overnight to 4 hours, and reduced minimal tissue protein necessary while maintaining sensitive detection limits; an evaluation of the pros and cons of each modification is also included. We further describe image acquisition by smartphone camera, export to Matlab, and densitometric analysis code to quantify and report cathepsin activity, adding portability and replacing large scale, darkbox imaging equipment that could be cost prohibitive in limited resource settings. PMID:23532386

  3. The effect of biomechanical variables on force sensitive resistor error: Implications for calibration and improved accuracy.

    Science.gov (United States)

    Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P

    2016-03-21

Force Sensitive Resistors (FSRs) are commercially available thin film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and resulting force readings. The effect of these variables and the degree to which they interact has yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo/mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible and if multiple FSRs are used in a system, they must be calibrated independently. Copyright © 2016 Elsevier Ltd. All rights reserved.
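The calibrate-then-quantify-error procedure can be sketched as follows. The linear fit, force/reading values and single test condition are illustrative assumptions, not the paper's protocol (which fitted calibration equations under a full factorial of temperature, curvature and compliance conditions):

```python
import math

def fit_linear(x, y):
    # Ordinary least-squares line y = a*x + b as a stand-in calibration
    # from sensor reading to applied force.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    a = num / den
    return a, my - a * mx

def error_metrics(y_true, y_pred):
    # The three measures used in the abstract: RMSE, MAE, maximum error.
    res = [abs(t - p) for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(r * r for r in res) / len(res))
    mae = sum(res) / len(res)
    return rmse, mae, max(res)

# Hypothetical applied forces (N) and FSR readings under one
# temperature/curvature/compliance condition:
force = [1.0, 2.0, 3.0, 4.0, 5.0]
reading = [0.9, 2.1, 2.9, 4.2, 4.9]
a, b = fit_linear(reading, force)
predicted = [a * r + b for r in reading]
rmse, mae, max_err = error_metrics(force, predicted)
print(round(rmse, 3), round(mae, 3), round(max_err, 3))
```

Repeating the fit per condition (and per sensor, as the authors recommend) and comparing the three metrics across conditions reveals how strongly each thermo/mechanical factor distorts the calibration.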

  4. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi‐layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE‐like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.

  5. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman’s concept of “Control and Observation” is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine’s error functions. A systematic error map of the machine’s workspace is produced based on error function measurements. The error map feeds into an error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine’s workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
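The postprocessor idea, correcting a commanded coordinate using an interpolated error map, can be sketched in one dimension (the grid spacing and error values below are hypothetical; the actual method works over the full three-dimensional workspace):

```python
def corrected_position(x, grid, errors):
    # One-dimensional slice of the idea: the positioning error is known
    # at grid points (measured by laser interferometer); linear
    # interpolation gives the error at a commanded coordinate x, and the
    # postprocessor offsets the command by -error.
    for i in range(len(grid) - 1):
        if grid[i] <= x <= grid[i + 1]:
            t = (x - grid[i]) / (grid[i + 1] - grid[i])
            err = (1 - t) * errors[i] + t * errors[i + 1]
            return x - err
    raise ValueError("x outside the mapped workspace")

# Hypothetical error map along one axis (mm), sampled at 0, 100, 200 mm
grid = [0.0, 100.0, 200.0]
errors = [0.000, 0.010, 0.030]
corrected = corrected_position(150.0, grid, errors)
print(corrected)  # command offset by the interpolated -0.020 mm error
```

In the volumetric case the same lookup becomes trilinear interpolation over the 3-D error map, applied to every target point the postprocessor emits.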

  6. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

Full Text Available Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, influenced the improvement of breast disease diagnostics. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those impossible to avoid and those potentially possible to reduce. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), improper setting of general enhancement or time gain curve or range. Errors dependent on the examiner, resulting in the wrong BIRADS‑usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimizing the number of errors made have been discussed, including the ones related to the appropriate examination technique, taking into account data from case history and the use of the greatest possible number of additional options such as: harmonic imaging, color and power Doppler and elastography. In the article, examples of errors resulting from the technical conditions of the method have been presented, and those dependent on the examiner which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  7. Adjusting estimative prediction limits

    OpenAIRE

    Masao Ueki; Kaoru Fueda

    2007-01-01

    This note presents a direct adjustment of the estimative prediction limit to reduce the coverage error from a target value to third-order accuracy. The adjustment is asymptotically equivalent to those of Barndorff-Nielsen & Cox (1994, 1996) and Vidoni (1998). It has a simpler form with a plug-in estimator of the coverage probability of the estimative limit at the target value. Copyright 2007, Oxford University Press.
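The coverage error that such adjustments target can be illustrated by simulation. The sketch below does not implement the note's third-order adjustment; it only contrasts the naive estimative limit (normal quantile plugged into the estimated mean and SD) with the exact t-based prediction limit for the Gaussian case, showing the under-coverage the adjustment is designed to remove:

```python
import math
import random

def coverage(limit_factor, n, reps, rng):
    # Monte Carlo coverage of the one-sided upper prediction limit
    # xbar + limit_factor * s for one future observation from N(0, 1).
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(0, 1) for _ in range(n)]
        xbar = sum(sample) / n
        s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))
        if rng.gauss(0, 1) <= xbar + limit_factor * s:
            hits += 1
    return hits / reps

rng = random.Random(7)
n = 5
z95 = 1.6449   # normal quantile: the naive "estimative" limit
t95 = 2.1318   # t quantile for n-1 = 4 degrees of freedom
cov_naive = coverage(z95, n, 20000, rng)
cov_exact = coverage(t95 * math.sqrt(1 + 1 / n), n, 20000, rng)
print(cov_naive)  # noticeably below the 0.95 target for small n
print(cov_exact)  # close to 0.95
```

For n = 5 the estimative limit covers only about 90% instead of the nominal 95%; the adjustment discussed in the note corrects such coverage error to third-order accuracy without requiring the exact distribution.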

  8. Effects of systematic sampling on satellite estimates of deforestation rates

    International Nuclear Information System (INIS)

    Steininger, M K; Godoy, F; Harper, G

    2009-01-01

Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1 deg. intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1 deg., 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25 deg. produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the
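The two error measures used in the study can be sketched as follows, under the simplifying assumption of simple-random-sampling variance (the sampled deforestation fractions and the full-map rate below are made up):

```python
import math

def sample_estimate(units):
    # units: deforested fraction observed in each sampled square.
    n = len(units)
    mean = sum(units) / n
    var = sum((u - mean) ** 2 for u in units) / (n - 1)
    se_mean = math.sqrt(var / n)
    # 95% CI expressed as a percentage of the estimate, as in the abstract
    ci_pct = 100 * 1.96 * se_mean / mean
    return mean, ci_pct

# Hypothetical deforestation fractions from ten 10 km sample squares
units = [0.02, 0.00, 0.05, 0.01, 0.03, 0.00, 0.04, 0.02, 0.01, 0.02]
mean, ci_pct = sample_estimate(units)
full_map_rate = 0.022  # "true" rate from a full-coverage map
actual_err_pct = 100 * abs(mean - full_map_rate) / full_map_rate
print(round(mean, 3), round(ci_pct, 1), round(actual_err_pct, 1))
```

The CI is the theoretical bound from the sample's own variance; the actual error requires a full-coverage map as ground truth, which is why the study could validate that the former brackets the latter in almost all cases.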

  9. A Systematic Evaluation of Ultrasound-based Fetal Weight Estimation Models on Indian Population

    Directory of Open Access Journals (Sweden)

    Sujitkumar S. Hiwale

    2017-12-01

Conclusion: We found that the existing fetal weight estimation models have high systematic and random errors on the Indian population, with a general tendency to overestimate fetal weight in the LBW category and underestimate it in the HBW category. We also observed that these models have a limited ability to predict babies at risk of either low or high birth weight. It is recommended that clinicians consider all these factors while interpreting the estimated weight given by the existing models.

  10. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  11. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

The main sources of uncertainties for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of reference equipment; random Poisson and non-Poisson errors during calibration; random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration are different for different kinds of instrumental measurements. The main sources of uncertainties for retrospective measurements conducted by surface trap techniques can be divided into two groups: errors of surface ²¹⁰Pb (²¹⁰Po) activity measurements and uncertainties of the transfer from ²¹⁰Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface trap retrospective technique can be decreased to 35%.
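Combining the listed uncertainty components can be sketched as a quadrature sum of independent relative errors, with the Poisson part following directly from counting statistics; the numerical budget below is hypothetical, not taken from the paper:

```python
import math

def total_relative_error(*components):
    # Independent relative uncertainties (calibration bias, Poisson
    # counting, non-Poisson instrumental scatter) combine in quadrature.
    return math.sqrt(sum(c * c for c in components))

def poisson_relative_error(counts):
    # Counting statistics: sigma = sqrt(N), so relative error = 1/sqrt(N).
    return 1.0 / math.sqrt(counts)

# Hypothetical budget for a track-detector measurement:
calib = 0.10                              # reference-equipment bias
counting = poisson_relative_error(400)    # 400 tracks counted -> 5%
non_poisson = 0.08                        # detector-to-detector scatter
total = total_relative_error(calib, counting, non_poisson)
print(round(100 * total, 1))  # → 13.7 (total relative error in percent)
```

Because the components add in quadrature, the largest term dominates: halving the counting error here barely moves the total, whereas reducing the calibration bias would.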

  12. Nucleon form factors. Probing the chiral limit

    Energy Technology Data Exchange (ETDEWEB)

    Goeckeler, M. [Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Haegler, P. [Technische Univ. Muenchen, Garching (Germany). Physik-Dept.; Horsley, R. [Edinburgh Univ. (GB). School of Physics] (and others)

    2006-10-15

    The electromagnetic form factors provide important hints for the internal structure of the nucleon and continue to be of major interest for experimentalists. For an intermediate range of momentum transfers the form factors can be calculated on the lattice. However, reliability of the results is limited by systematic errors due to the required extrapolation to physical quark masses. Chiral effective field theories predict a rather strong quark mass dependence in a range which was yet unaccessible for lattice simulations. We give an update on recent results from the QCDSF collaboration using gauge configurations with Nf=2, non-perturbatively O(a)-improved Wilson fermions at very small quark masses down to 340 MeV pion mass, where we start to probe the relevant quark mass region. (orig.)

  13. Nucleon form factors. Probing the chiral limit

    International Nuclear Information System (INIS)

    Goeckeler, M.; Haegler, P.; Horsley, R.

    2006-10-01

    The electromagnetic form factors provide important hints for the internal structure of the nucleon and continue to be of major interest for experimentalists. For an intermediate range of momentum transfers the form factors can be calculated on the lattice. However, reliability of the results is limited by systematic errors due to the required extrapolation to physical quark masses. Chiral effective field theories predict a rather strong quark mass dependence in a range which was yet unaccessible for lattice simulations. We give an update on recent results from the QCDSF collaboration using gauge configurations with Nf=2, non-perturbatively O(a)-improved Wilson fermions at very small quark masses down to 340 MeV pion mass, where we start to probe the relevant quark mass region. (orig.)

  14. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which error is corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails: (i) when bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both of these have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
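The packet combining (PC) scheme that the letter builds on can be sketched as follows. The integrity check here is a toy position-weighted checksum standing in for a real CRC, and the failure mode noted in the abstract (identical error locations in both copies) shows up as the `None` return:

```python
from itertools import product

def checksum(bits):
    # Stand-in integrity check (a real receiver would use a CRC):
    # position-weighted sum so that compensating bit flips are caught.
    return sum((i + 1) * b for i, b in enumerate(bits)) % 251

def packet_combine(copy1, copy2, ref):
    # Packet combining: bit positions where two erroneous copies differ
    # are the candidate error locations; try every way of taking each
    # differing bit from one copy or the other and keep the candidate
    # that passes the integrity check.
    diff = [i for i in range(len(copy1)) if copy1[i] != copy2[i]]
    for choice in product([0, 1], repeat=len(diff)):
        candidate = list(copy1)
        for pick, i in zip(choice, diff):
            if pick:
                candidate[i] = copy2[i]
        if checksum(candidate) == ref:
            return candidate
    return None  # fails if both copies are corrupted at the same position

original = [1, 0, 1, 1, 0, 0, 1, 0]
copy1 = original[:]; copy1[2] ^= 1   # bit error at position 2
copy2 = original[:]; copy2[5] ^= 1   # bit error at position 5
result = packet_combine(copy1, copy2, checksum(original))
print(result)
```

Because errors hit different positions in the two copies, one of the 2^|diff| candidates reproduces the original packet; when both copies flip the same bit, no candidate can recover it, which is exactly the gap PRPC and MPC address.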

  15. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  16. Advancing the research agenda for diagnostic error reduction.

    Science.gov (United States)

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  17. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  18. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

speaker’s competency (note the –y ending! reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system … with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical … as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....

  19. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  20. Analysis of the interface tracking errors

    International Nuclear Information System (INIS)

    Cerne, G.; Tiselj, I.; Petelin, S.

    2001-01-01

    An important limitation of the interface-tracking algorithm is the grid density, which determines the space scale of the surface tracking. In this paper the analysis of the interface tracking errors, which occur in a dispersed flow, is performed for the VOF interface tracking method. A few simple two-fluid tests are proposed for the investigation of the interface tracking errors and their grid dependence. When the grid density becomes too coarse to follow the interface changes, the errors can be reduced either by using denser nodalization or by switching to the two-fluid model during the simulation. Both solutions are analyzed and compared on a simple vortex-flow test. (author)

  1. Comparison of discovery limits for extra Z bosons at future colliders

    International Nuclear Information System (INIS)

    Godfrey, S.

    1995-01-01

    We study and compare the discovery potential for heavy neutral gauge bosons (Z') at various e⁺e⁻ and hadron (pp or pp̄) colliders that are planned or have been proposed. Typical discovery limits are ∼1 TeV for the Fermilab Tevatron, ∼2 TeV for the Di-Tevatron, ∼4 TeV for the CERN LHC and ∼13 TeV for LSGNA (a 60 TeV pp collider), while the e⁺e⁻ discovery limits are (2-10)×√s, with the large variation reflecting the model dependence of the limits. While both types of colliders have comparable discovery limits, the hadron colliders are generally less dependent on the specific Z' model and provide more robust limits, since the signal has little background. In contrast, discovery limits at e⁺e⁻ colliders are more model dependent and, because they are based on indirect inferences of deviations from standard model predictions, they are more sensitive to systematic errors.

  2. Modeling the North American vertical datum of 1988 errors in the conterminous United States

    Science.gov (United States)

    Li, X.

    2018-02-01

    A large systematic difference (ranging from -20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions such as polynomials, B-splines, and Legendre functions and Latent Variable Analysis (LVA) such as Factor Analysis (FA) are used to analyze the systematic difference. Beyond giving a mathematical model, the regression results do not reveal a great deal about the physical causes of the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, a significant amount of non-Gaussian signal is left in the residuals of the conventional regression models. The FA method, on the other hand, not only provides a better fit of the data, but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolating method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.

  3. Modeling the North American vertical datum of 1988 errors in the conterminous United States

    Directory of Open Access Journals (Sweden)

    Li X.

    2018-02-01

    Full Text Available A large systematic difference (ranging from −20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions such as polynomials, B-splines, and Legendre functions and Latent Variable Analysis (LVA) such as Factor Analysis (FA) are used to analyze the systematic difference. Beyond giving a mathematical model, the regression results do not reveal a great deal about the physical causes of the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, a significant amount of non-Gaussian signal is left in the residuals of the conventional regression models. The FA method, on the other hand, not only provides a better fit of the data, but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolating method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.
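The trend-surface regression step described in this abstract can be sketched numerically. The following is a minimal illustration on synthetic station data (the coordinates, the quadratic basis, and the noise level are all invented for the example; the actual NAVD 88 data, the B-spline/Legendre bases, and the factor-analysis step are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for NAVD 88 minus gravimetric geoid differences (cm):
# a smooth east-west tilt plus a north-south curvature plus noise,
# at 500 hypothetical station locations in the conterminous US.
lon = rng.uniform(-125, -70, 500)
lat = rng.uniform(25, 49, 500)
diff_cm = 0.9 * (lon + 95) + 0.2 * (lat - 37) ** 2 + rng.normal(0, 3, 500)

# Conventional regression: fit a bivariate quadratic trend surface
# by least squares, after a crude normalisation for conditioning.
x = (lon + 95) / 30.0
y = (lat - 37) / 12.0
A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, diff_cm, rcond=None)

residual = diff_cm - A @ coef
rms = float(np.sqrt(np.mean(residual**2)))
print(f"residual RMS after trend removal: {rms:.1f} cm")
```

The abstract's point is that such predefined-basis fits leave structured (non-Gaussian) residuals on real data; the FA approach instead lets the data determine the spatial loading patterns.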

  4. Human error in strabismus surgery: Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    S. Schutte (Sander); J.R. Polling (Jan Roelof); F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)

    2009-01-01

    textabstractBackground: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence

  5. Human error in strabismus surgery : Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.

    2008-01-01

    Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods- We identified the primary factors that influence the outcome of

  6. Evaluation of positioning errors of the patient using cone beam CT megavoltage; Evaluacion de errores de posicionamiento del paciente mediante Cone Beam CT de megavoltaje

    Energy Technology Data Exchange (ETDEWEB)

    Garcia Ruiz-Zorrilla, J.; Fernandez Leton, J. P.; Zucca Aparicio, D.; Perez Moreno, J. M.; Minambres Moro, A.

    2013-07-01

    Image-guided radiation therapy allows the patient's position in the treatment unit to be assessed and corrected, thus reducing the uncertainties due to patient positioning. This work assesses the systematic and random errors derived from the corrections made to a series of patients with different diseases through an off-line megavoltage cone beam CT (CBCT) protocol. (Author)

  7. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
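The role of the final CRC described above can be illustrated with a toy sector. This sketch uses Python's stdlib CRC-32 rather than the actual CD-ROM EDC polynomial and none of the Reed-Solomon layers, so it shows only the principle that a sufficiently short burst error always trips the final integrity check:

```python
import binascii

def crc_ok(payload: bytes, stored_crc: int) -> bool:
    """Final integrity check, analogous in role to the CD-ROM sector CRC."""
    return binascii.crc32(payload) & 0xFFFFFFFF == stored_crc

sector = bytes(range(256)) * 8          # 2048-byte stand-in for one data sector
crc = binascii.crc32(sector) & 0xFFFFFFFF

# Inject a short burst error: invert a 24-bit run inside the sector.
# A degree-32 CRC is guaranteed to detect any burst of 32 bits or fewer.
corrupted = bytearray(sector)
for i in range(100, 103):
    corrupted[i] ^= 0xFF

print(crc_ok(sector, crc))            # True
print(crc_ok(bytes(corrupted), crc))  # False
```

In the full CD-ROM chain the Reed-Solomon stages would normally correct such a burst before the CRC is consulted; the CRC is the last line of defence against miscorrection of the kind the abstract discusses.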

  8. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis cart system. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unidosis carts that had not been revised showed a 0.9% medication error rate (264 errors) versus 0.6% (154 errors) in carts previously revised. In carts not revised, 70.83% of the errors arose when setting up the carts; the rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: We conclude that unidosis carts need to be revised and that a computerized prescription system is needed to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  9. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary…

  10. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  11. Modelling the Errors of EIA’s Oil Prices and Production Forecasts by the Grey Markov Model

    Directory of Open Access Journals (Sweden)

    Gholam Hossein Hasantash

    2012-01-01

    Full Text Available Grey theory concerns the systematic analysis of limited information. The Grey-Markov model can improve the accuracy of the forecast range for randomly fluctuating data sequences. In this paper, we apply this model to the energy system. The average errors of the Energy Information Administration's predictions for the world oil price (1982 to 2007) and domestic crude oil production (1985 to 2008) were used as two forecast examples. We show that the proposed Grey-Markov model improves the forecast accuracy of the original Grey forecast model.
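The grey half of the Grey-Markov method can be sketched as a standard GM(1,1) fit. This is a minimal illustration on an invented growth series; the Markov-chain correction of residual states, which is the paper's refinement, is omitted:

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to a short positive series and return a
    forecaster. This is only the grey part of the Grey-Markov method."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                    # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])         # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

    def predict(k):
        # k = 0 reproduces the first point; k >= len(x0) extrapolates.
        if k == 0:
            return float(x0[0])
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return float(x1_hat - x1_prev)    # inverse AGO: difference of cumsums

    return predict

# Invented series with steady ~10% growth; the forecast should extend the trend.
series = [100, 110, 121, 133, 146]
predict = gm11_fit(series)
print(round(predict(5), 1))               # one step beyond the data
```

On a near-exponential series the GM(1,1) fit is close, so residuals are small; on fluctuating series (such as forecast-error sequences) the residuals are larger, which is where the Markov state correction earns its keep.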

  12. Systematic errors in ground heat flux estimation and their correction

    NARCIS (Netherlands)

    Gentine, P.; Entekhabi, D.; Heusinkveld, B.G.

    2012-01-01

    Incoming radiation forcing at the land surface is partitioned among the components of the surface energy balance in varying proportions depending on the time scale of the forcing. Based on a land-atmosphere analytic continuum model, a numerical land-surface model and field observations we show that

  13. Fundamental limits on beam stability at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Decker, G. A.

    1998-01-01

    Orbit correction is now routinely performed at the few-micron level in the Advanced Photon Source (APS) storage ring. Three diagnostics are presently in use to measure and control both AC and DC orbit motions: broad-band turn-by-turn rf beam position monitors (BPMs), narrow-band switched heterodyne receivers, and photoemission-style x-ray beam position monitors. Each type of diagnostic has its own set of systematic error effects that place limits on the ultimate pointing stability of x-ray beams supplied to users at the APS. Limiting sources of beam motion at present are magnet power supply noise, girder vibration, and thermal timescale vacuum chamber and girder motion. This paper will investigate the present limitations on orbit correction, and will delve into the upgrades necessary to achieve true sub-micron beam stability

  14. Eigen's Error Threshold and Mutational Meltdown in a Quasispecies Model

    OpenAIRE

    Bagnoli, F.; Bezzi, M.

    1998-01-01

    We introduce a toy model for interacting populations connected by mutations and limited by a shared resource. We study the presence of Eigen's error threshold and mutational meltdown. The phase diagram of the system shows that the extinction of the whole population due to mutational meltdown can occur well before an eventual error threshold transition.
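The error-threshold part of this picture can be illustrated with the standard single-peak Eigen approximation (note this is the textbook quasispecies estimate, not the authors' toy model with interacting populations and a shared resource):

```python
import math

def master_equilibrium(A, u, L):
    """Approximate equilibrium frequency of the master sequence in the
    single-peak Eigen model (back mutations neglected): positive below
    the error threshold, zero above it. A = master fitness advantage,
    u = per-base error rate, L = genome length."""
    Q = (1.0 - u) ** L          # probability of copying the master without error
    x = (A * Q - 1.0) / (A - 1.0)
    return max(x, 0.0)

A, L = 10.0, 50                  # illustrative values, not from the paper
u_c = math.log(A) / L            # classic error-threshold estimate A*Q = 1
print(round(u_c, 4))
print(master_equilibrium(A, 0.5 * u_c, L) > 0.0)   # below threshold: survives
print(master_equilibrium(A, 2.0 * u_c, L) == 0.0)  # above threshold: lost
```

Mutational meltdown is a distinct, demographic effect (population extinction through accumulating deleterious mutations in a finite population), which is why the paper's phase diagram can show meltdown occurring well before the error-threshold transition.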

  15. Reducing diagnostic errors in medicine: what's the goal?

    Science.gov (United States)

    Graber, Mark; Gordon, Ruthanna; Franklin, Nancy

    2002-10-01

    This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.

  16. Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen; Williamson, Jeffrey F.; Schmidt-Ullrich, Rupert K.

    2005-01-01

    Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for σ = Σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), clinical target volume (CTV) D90, nodes D90, cord D2, and parotid D50 and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: SIB-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm. Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% having a 5% dose error.
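The two simulation mechanisms in the abstract (blurring for random error, a rigid shift for systematic error) can be sketched in one dimension. The profile, grid, and σ values below are illustrative stand-ins, not the study's beam data:

```python
import numpy as np

def gaussian_kernel(sigma_mm, grid_mm, half_width=4.0):
    """Discrete Gaussian setup-error PDF sampled on the fluence grid."""
    n = int(np.ceil(half_width * sigma_mm / grid_mm))
    x = np.arange(-n, n + 1) * grid_mm
    k = np.exp(-0.5 * (x / sigma_mm) ** 2)
    return k / k.sum()

grid_mm = 1.0
fluence = np.zeros(101)
fluence[30:71] = 1.0                      # idealised 40 mm open segment

# Random setup error: convolve the fluence profile with the error PDF
# (sigma = 3 mm), which broadens the penumbra but preserves the plateau.
blurred = np.convolve(fluence, gaussian_kernel(3.0, grid_mm), mode="same")

# Systematic setup error: a single rigid shift drawn once per "patient"
# from a normal distribution with Sigma = 3 mm.
rng = np.random.default_rng(1)
shift_px = int(round(rng.normal(0.0, 3.0) / grid_mm))
shifted = np.roll(fluence, shift_px)

print(float(blurred[50]), float(blurred[30]))
```

The qualitative result matches the abstract's finding: blurring leaves the plateau dose intact (so random errors are forgiving), whereas a rigid shift moves the whole dose edge relative to the target (so systematic errors dominate).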

  17. Role of memory errors in quantum repeaters

    International Nuclear Information System (INIS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Duer, W.

    2007-01-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.

  18. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  19. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  20. THE DISKMASS SURVEY. II. ERROR BUDGET

    International Nuclear Information System (INIS)

    Bershady, Matthew A.; Westfall, Kyle B.; Verheijen, Marc A. W.; Martinsson, Thomas; Andersen, David R.; Swaters, Rob A.

    2010-01-01

    We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface-brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*_disk), and disk maximality (F*_disk,max ≡ V*_disk,max / V_c). Random and systematic errors in these quantities for individual galaxies will be ∼25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.

  1. Consequences of leaf calibration errors on IMRT delivery

    International Nuclear Information System (INIS)

    Sastre-Padro, M; Welleweerd, J; Malinen, E; Eilertsen, K; Olsen, D R; Heide, U A van der

    2007-01-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT

  2. The probability and the management of human error

    International Nuclear Information System (INIS)

    Dufey, R.B.; Saull, J.W.

    2004-01-01

    Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident cause. The consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with and derived from the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε) and stochastic occurrences with a finite minimum rate: λ = 5×10⁻⁵ + ((1/ε) − 5×10⁻⁵) exp(−3ε). The future failure rate is entirely determined by the experience: thus the past defines the future.
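The learning-curve equation quoted in the abstract is easy to tabulate. The constants below follow the garbled equation as transcribed (minimum rate 5×10⁻⁵, decay exp(−3ε)); the exact form in the published paper may differ:

```python
import math

LAMBDA_MIN = 5e-5   # minimum attainable error rate quoted in the abstract

def human_error_rate(experience):
    """Error rate lambda as a function of accumulated experience epsilon,
    per the abstract's equation: high during early inexperience, decaying
    with learning toward a finite minimum rate."""
    eps = experience
    return LAMBDA_MIN + (1.0 / eps - LAMBDA_MIN) * math.exp(-3.0 * eps)

for eps in (0.1, 1.0, 5.0, 20.0):
    print(eps, f"{human_error_rate(eps):.3e}")
```

The table shows the claimed behaviour: the rate is dominated early on by the 1/ε inexperience term and asymptotes to the irreducible minimum as experience accumulates.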

  3. Hospital medication errors in a pharmacovigilance system in Colombia

    Directory of Open Access Journals (Sweden)

    Jorge Enrique Machado-Alba

    2015-11-01

    Full Text Available Objective: this study analyzes the medication errors reported to a pharmacovigilance system by 26 hospitals for patients in the healthcare system of Colombia. Methods: this retrospective study analyzed the medication errors reported to a systematized database between 1 January 2008 and 12 September 2013. The medication is dispensed by the company Audifarma S.A. to hospitals and clinics around Colombia. Data were classified according to the taxonomy of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP). The data analysis was performed using SPSS 22.0 for Windows, considering p-values < 0.05 significant. Results: there were 9 062 medication errors in 45 hospital pharmacies. Real errors accounted for 51.9% (n = 4 707), of which 12.0% (n = 567) reached the patient (Categories C to I) and caused harm (Categories E to I) to 17 subjects (0.36%). The main process involved in errors that occurred (Categories B to I) was prescription (n = 1 758, 37.3%), followed by dispensation (n = 1 737, 36.9%), transcription (n = 970, 20.6%) and administration (n = 242, 5.1%). The errors in the administration process were 45.2 times more likely to reach the patient (95% CI: 20.2-100.9). Conclusions: medication error reporting systems and prevention strategies should be widespread in hospital settings, prioritizing efforts to address the administration process.
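An effect estimate of the kind quoted above ("45.2 times more likely", with a wide CI) can be computed from a 2x2 table. The sketch below uses the Woolf log-scale interval and counts invented purely to produce an odds ratio of similar magnitude; the study's actual counts and statistical method are not reproduced here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a Woolf (log-scale) 95% CI for a 2x2 table:
    a/b = errors reaching / not reaching the patient for one process,
    c/d = the same for the comparison group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen only for illustration, NOT the study's data.
or_, lo, hi = odds_ratio_ci(50, 200, 100, 18_000)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

Note how the asymmetry of the interval around the point estimate (wider above than below) is characteristic of a log-scale CI, matching the shape of the interval reported in the abstract.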

  4. Local systematic differences in 2MASS positions

    Science.gov (United States)

    Bustos Fierro, I. H.; Calderón, J. H.

    2018-01-01

    We have found that positions in the 2MASS All-Sky Catalog of Point Sources show local systematic differences with characteristic length-scales of ˜ 5 to ˜ 8 arcminutes when compared with several catalogs. We have observed that when 2MASS positions are used in the computation of proper motions, these systematic differences cause systematic errors in the resulting proper motions. We have developed a method to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The 2MASS catalog rectified with the proposed method can be regarded as an extension of UCAC4 for astrometry, with accuracy ˜ 90 mas in its positions and negligible systematic errors. We also show that the use of these rectified positions removes the observed systematic pattern in proper motions derived from original 2MASS positions.
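The idea of local rectification can be sketched very simply: estimate the median positional offset between the two catalogs in small sky cells and subtract it. This is a crude stand-in under assumed parameters (6-arcminute cells, synthetic data), not the authors' actual algorithm, which the abstract does not detail.

```python
import numpy as np

def local_offsets(ra, dec, dra, ddec, cell_arcmin=6.0):
    """Per-cell median offset (catalog A - catalog B) as a local systematic estimate.

    Inputs in degrees; offsets dra/ddec in mas. Subtracting the returned
    corrections from the positions 'rectifies' one catalog onto the other.
    """
    cell = cell_arcmin / 60.0                    # cell size in degrees
    ix = np.floor(ra / cell).astype(int)
    iy = np.floor(dec / cell).astype(int)
    corr_ra = np.zeros_like(dra)
    corr_dec = np.zeros_like(ddec)
    for key in set(zip(ix, iy)):
        m = (ix == key[0]) & (iy == key[1])
        corr_ra[m] = np.median(dra[m])           # local systematic in RA
        corr_dec[m] = np.median(ddec[m])         # local systematic in Dec
    return corr_ra, corr_dec

# Synthetic demo: a smooth systematic pattern plus 90 mas random scatter.
rng = np.random.default_rng(0)
ra = rng.uniform(10.0, 10.5, 2000)
dec = rng.uniform(20.0, 20.5, 2000)
dra = 50 * np.sin(2 * np.pi * ra / 0.5) + rng.normal(0, 90, ra.size)
ddec = 50 * np.cos(2 * np.pi * dec / 0.5) + rng.normal(0, 90, dec.size)
cra, cdec = local_offsets(ra, dec, dra, ddec)
print("RA scatter before/after:", np.std(dra), np.std(dra - cra))
```

Subtracting the per-cell medians removes the smooth systematic pattern while leaving the random ~90 mas scatter, mirroring the paper's claim that rectification suppresses systematics without improving random accuracy.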

  5. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as setting specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards to patient health care, and measures and recommendations to minimize or eliminate them. Laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of a private hospital in Egypt. Errors have been classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors were of non-significant implication for patient health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study were consistent with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  6. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservatism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated to be significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
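The abstract's key point, that correlated errors shrink the uncertainty of a difference (growth) estimate, follows directly from the variance of a difference of two correlated quantities. The sketch below is a minimal illustration under assumed numbers (a hypothetical 5% through-wall sizing error in each inspection), not the paper's full error model.

```python
import math

def growth_uncertainty(sigma1: float, sigma2: float, rho: float) -> float:
    """Standard deviation of a flaw-growth estimate d = m2 - m1.

    Var(d) = sigma1^2 + sigma2^2 - 2*rho*sigma1*sigma2: positive correlation
    between the two inspections' errors (shared systematic sources) cancels
    in the difference and reduces the growth uncertainty.
    """
    var = sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2
    return math.sqrt(var)

# Hypothetical 5% through-wall sizing error in each of two inspections:
print(growth_uncertainty(5.0, 5.0, 0.0))   # independent errors
print(growth_uncertainty(5.0, 5.0, 0.8))   # strongly correlated errors
```

With ρ = 0.8 the growth uncertainty drops from about 7.1 to about 3.2 in the same units, which is the mechanism behind the paper's smaller-than-expected error on growth.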

  7. Triple-Error-Correcting Codec ASIC

    Science.gov (United States)

    Jones, Robert E.; Segallis, Greg P.; Boyd, Robert

    1994-01-01

    Coder/decoder constructed on a single integrated-circuit chip. Handles data in a variety of formats at rates up to 300 Mbps, correcting up to 3 errors per data block.
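Any block code that corrects up to 3 errors must satisfy the Hamming (sphere-packing) bound: the parity bits must supply enough distinct syndromes to cover every error pattern of weight 0 through 3. The record does not state which code the ASIC implements, so the parameters below (a classical BCH(63, 45) triple-error-correcting code) are an illustrative assumption.

```python
from math import comb

def hamming_bound_ok(n: int, k: int, t: int) -> bool:
    """Check the sphere-packing bound for a binary [n, k] code correcting t errors:
    2^(n-k) syndromes must cover all error patterns of weight <= t."""
    syndromes = 2 ** (n - k)
    patterns = sum(comb(n, w) for w in range(t + 1))
    return syndromes >= patterns

# BCH(63, 45) has 18 parity bits -- ample syndromes for t = 3:
print(hamming_bound_ok(63, 45, 3))   # True
# A Hamming(7, 4) code cannot correct 2 errors:
print(hamming_bound_ok(7, 4, 2))     # False
```

The bound is necessary but not sufficient; actually achieving t = 3 at 300 Mbps is what motivates a dedicated codec ASIC rather than a software decoder.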