WorldWideScience

Sample records for control systematic errors

  1. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    … This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail, both empirically and through computational cognitive modeling…

  2. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
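
    As a rough illustration of the unisim/multisim comparison described above, here is a minimal Python sketch; it is a toy, not the paper's calculation. A bin prediction depends linearly on a few systematic parameters, `mc_prediction` stands in for a real detector simulation, and all values are invented. Note how both estimates are contaminated by the MC statistical noise, which is exactly the effect the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: the prediction in one data bin depends linearly on
# n_sys systematic parameters (all names and values are illustrative).
n_sys = 5
beta = rng.normal(0.0, 1.0, n_sys)   # sensitivity of the bin to each parameter
sigma_mc = 0.2                       # statistical noise of a single MC run

def mc_prediction(deltas):
    """One MC run: shift parameters by `deltas` (sigma units) plus MC noise."""
    return beta @ deltas + rng.normal(0.0, sigma_mc)

nominal = mc_prediction(np.zeros(n_sys))

# Unisim: one run per parameter, each shifted by +1 sigma.
unisim_shifts = np.array(
    [mc_prediction(np.eye(n_sys)[i]) - nominal for i in range(n_sys)]
)
var_unisim = np.sum(unisim_shifts**2)    # estimated total systematic variance

# Multisim: N runs with all parameters drawn from N(0, 1) simultaneously.
n_multi = 200
multi = np.array([mc_prediction(rng.normal(0.0, 1.0, n_sys))
                  for _ in range(n_multi)])
var_multisim = np.var(multi, ddof=1)

print(f"exact systematic variance: {np.sum(beta**2):.3f}")
print(f"unisim estimate          : {var_unisim:.3f}")
print(f"multisim estimate        : {var_multisim:.3f}")
```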

  3. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  4. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  5. SU-D-BRD-07: Evaluation of the Effectiveness of Statistical Process Control Methods to Detect Systematic Errors For Routine Electron Energy Verification

    International Nuclear Information System (INIS)

    Parker, S

    2015-01-01

    Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors
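
    A minimal sketch of the statistical-process-control bookkeeping this abstract describes, using an individuals/moving-range chart. The readings and the ±2% specification limits below are invented stand-ins, not the study's data.

```python
import numpy as np

# Hypothetical daily energy-constancy readings (ratio to baseline);
# the values are made up for illustration.
x = np.array([1.001, 0.998, 1.002, 0.999, 1.000, 1.003, 0.997, 1.001,
              0.999, 1.002, 1.000, 0.998, 1.001, 1.000, 0.999])

# Individuals/moving-range chart: sigma is estimated as MRbar / d2,
# with d2 = 1.128 for moving ranges of size 2 (standard SPC constant).
sigma = np.abs(np.diff(x)).mean() / 1.128
center = x.mean()
ucl, lcl = center + 3 * sigma, center - 3 * sigma

# Specification limits, here assumed to be +/-2% in the spirit of TG-142.
usl, lsl = 1.02, 0.98
cp = (usl - lsl) / (6 * sigma)                       # process capability ratio
cpk = min(usl - center, center - lsl) / (3 * sigma)  # process acceptability

print(f"control limits [{lcl:.4f}, {ucl:.4f}], spec limits [{lsl}, {usl}]")
print(f"Cp = {cp:.1f}, Cpk = {cpk:.1f}, points out of control: "
      f"{int(((x > ucl) | (x < lcl)).sum())}")
```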

  6. Control charts for identifying systematic errors using control sera to detect antibody to Salmonella in an indirect ELISA

    DEFF Research Database (Denmark)

    Bak, H.; Barfod, Kristen

    2008-01-01

    This study evaluated the preparation of Shewhart's control charts using the concept of rational subgroups for monitoring the Salmonella antibody ELISA used for surveillance of Danish pig herds. Control charts were prepared for a buffer control sample, a negative serum sample and a positive serum...

  7. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung…
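
    The variance inflation described above can be reproduced numerically. The sketch below is an assumption-laden toy, not the paper's analysis: it estimates a 1D integral by systematic sampling with a random start and shows the estimator variance growing as Gaussian placement errors grow.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(12 * x) + 2.0   # an arbitrary smooth test function
h, n, reps = 0.05, 20, 20000         # grid spacing, points, replications

def estimate(placement_sd):
    u = rng.uniform(0, h)                    # random start (classical setup)
    pts = u + h * np.arange(n)               # ideal periodic grid
    pts += rng.normal(0, placement_sd, n)    # errors in sample locations
    return h * f(pts).sum()                  # estimator of the integral

for sd in (0.0, 0.01, 0.02):
    est = np.array([estimate(sd) for _ in range(reps)])
    print(f"placement sd = {sd:.2f}: estimator variance = {est.var():.2e}")
```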

  8. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward
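
    The two rules stressed in this abstract (common and uncorrelated errors add in quadrature; repetition shrinks only the statistical part) can be checked in a few lines. The error magnitudes below are arbitrary.

```python
import numpy as np

# Repetition averages down the statistical component but leaves the
# common (systematic) one untouched; totals combine in quadrature.
sigma_stat, sigma_sys = 0.10, 0.03   # illustrative single-measurement errors
for n in (1, 10, 100):
    stat_n = sigma_stat / np.sqrt(n)        # statistical part after n repeats
    total = np.hypot(stat_n, sigma_sys)     # quadrature sum
    print(f"n={n:4d}: stat={stat_n:.4f}  sys={sigma_sys:.4f}  total={total:.4f}")
```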

  9. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision.

  10. Random and systematic errors in case–control studies calculating the injury risk of driving under the influence of psychoactive substances

    DEFF Research Database (Denmark)

    Houwing, Sjoerd; Hagenzieker, Marjan; Mathijssen, René P.M.

    2013-01-01

    Between 2006 and 2010, six population-based case-control studies were conducted as part of the European research project DRUID (DRiving Under the Influence of Drugs, alcohol and medicines). The aim of these case-control studies was to calculate odds ratios indicating the relative risk of serious… The list of indicators that was identified in this study is useful both as guidance for systematic reviews and meta-analyses and for future epidemiological studies in the field of driving under the influence, to minimize sources of errors already at the start of the study.

  11. Systematic Review of Errors in Inhaler Use

    DEFF Research Database (Denmark)

    Sanchis, Joaquin; Gich, Ignasi; Pedersen, Søren

    2016-01-01

    A systematic search for articles reporting direct observation of inhaler technique by trained personnel covered the period from 1975 to 2014. Outcomes were the nature and frequencies of the three most common errors; the percentage of patients demonstrating correct, acceptable, or poor technique; and variations in these outcomes over these 40 years and when partitioned into years 1 to 20 and years 21 to 40. Analyses were conducted in accordance with recommendations from Preferred Reporting Items for Systematic Reviews and Meta-Analyses and Strengthening the Reporting of Observational Studies in Epidemiology. Results: Data…

  12. Investigation of systematic errors of metastable "atomic pair" number

    CERN Document Server

    Yazkov, V

    2015-01-01

    Sources of systematic errors in the analysis of the data collected in 2012 are analysed. Estimations of systematic errors in the number of "atomic pairs" from metastable π+π− atoms are presented.

  13. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts showed 0.9% medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors were mainly caused when setting up the unidosis carts. The rest were due to lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes not having been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%), discharge without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: There is a need to revise unidosis carts and to introduce a computerized prescription system to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  14. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

    Systematic error growth rate peaks at wavenumber 2 up to the 4-day forecast, then … the influence of summer systematic error and random … total exchange. When the error energy budgets are examined in the spectral domain, one may ask questions on the error growth at a certain wavenumber from its interaction with …

  15. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  16. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are used rarely. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation would not yield any significant benefits in this case.

  17. SHERPA: A systematic human error reduction and prediction approach

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1986-01-01

    This paper describes a Systematic Human Error Reduction and Prediction Approach (SHERPA) which is intended to provide guidelines for human error reduction and quantification in a wide range of human-machine systems. The approach utilizes as its basis current cognitive models of human performance. The first module in SHERPA performs task and human error analyses, which identify likely error modes, together with guidelines for the reduction of these errors by training, procedures and equipment redesign. The second module uses a SARAH approach to quantify the probability of occurrence of the errors identified earlier, and provides cost-benefit analyses to assist in choosing the appropriate error reduction approaches in the third module.

  18. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    NARCIS (Netherlands)

    Luijten, M.; Machielsen, M.W.J.; Veltman, D.J.; Hester, R.; de Haan, L.; Franken, I.H.A.

    2014-01-01

    Background: Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. …

  19. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    NARCIS (Netherlands)

    Luijten, Maartje; Machielsen, Marise W. J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H. A.

    2014-01-01

    Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. …

  20. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy corrected OCO backbone.

  1. Systematic review of ERP and fMRI studies investigating inhibitory control and error processing in people with substance dependence and behavioural addictions

    Science.gov (United States)

    Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.

    2014-01-01

    Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877

  2. Systematic errors in VLF direction-finding of whistler ducts

    International Nuclear Information System (INIS)

    Strangeways, H.J.; Rycroft, M.J.

    1980-01-01

    In the previous paper it was shown that the systematic error in the azimuthal bearing due to multipath propagation and incident wave polarisation (when this also constitutes an error) was given by only three different forms for all VLF direction-finders currently used to investigate the position of whistler ducts. In this paper the magnitude of this error is investigated for different ionospheric and ground parameters for these three different systematic error types. By incorporating an ionosphere for which the refractive index is given by the full Appleton-Hartree formula, the variation of the systematic error with ionospheric electron density and latitude and direction of propagation is investigated in addition to the variation with wave frequency, ground conductivity and dielectric constant and distance of propagation. The systematic bearing error is also investigated for the three methods when the azimuthal bearing is averaged over a 2 kHz bandwidth. This is found to lead to a significantly smaller bearing error which, for the crossed-loops goniometer, approximates the bearing error calculated when phase-dependent terms in the receiver response are ignored. (author)

  3. Systematic Errors in Dimensional X-ray Computed Tomography

    DEFF Research Database (Denmark)

    In dimensional X-ray computed tomography (CT), many physical quantities influence the final result. However, it is important to know which factors in CT measurements potentially lead to systematic errors, so that it is possible to compensate for them. In this talk, typical error sources in dimensional X-ray CT are discussed…

  4. Tackling systematic errors in quantum logic gates with composite rotations

    International Nuclear Information System (INIS)

    Cummins, Holly K.; Llewellyn, Gavin; Jones, Jonathan A.

    2003-01-01

    We describe the use of composite rotations to combat systematic errors in single-qubit quantum logic gates and discuss three families of composite rotations which can be used to correct off-resonance and pulse length errors. Although developed and described within the context of nuclear magnetic resonance quantum computing, these sequences should be applicable to any implementation of quantum computation
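
    One standard family of such composite rotations is Wimperis' BB1 sequence for pulse-length errors. The sketch below numerically compares a naive 90° pulse with its BB1-corrected version under a fractional pulse-length error; it illustrates the general idea and is not code from the paper.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(theta, phi):
    """Rotation by `theta` about an axis at angle `phi` in the xy-plane."""
    axis = np.cos(phi) * sx + np.sin(phi) * sy
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

theta = np.pi / 2                      # target gate: 90-degree x rotation
ideal = rot(theta, 0.0)
phi = np.arccos(-theta / (4 * np.pi))  # BB1 phase (Wimperis)

def fidelity(u):
    return abs(np.trace(ideal.conj().T @ u)) / 2

for eps in (0.01, 0.05, 0.10):         # fractional pulse-length error
    naive = rot(theta * (1 + eps), 0.0)
    # BB1: theta pulse, then pi(phi), 2*pi(3*phi), pi(phi); every pulse
    # is stretched by the same (1 + eps) error.
    bb1 = (rot(np.pi * (1 + eps), phi)
           @ rot(2 * np.pi * (1 + eps), 3 * phi)
           @ rot(np.pi * (1 + eps), phi)
           @ rot(theta * (1 + eps), 0.0))
    print(f"eps={eps:.2f}: naive infidelity={1 - fidelity(naive):.2e}, "
          f"BB1 infidelity={1 - fidelity(bb1):.2e}")
```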

  5. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, for example those caused by a small base length, such an image orientation does not reach the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and only a 4 Hz attitude recording, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency of systematic deformation in a Pléiades tri-stereo combination with small base length; the small base length enlarges small systematic errors in object space. But systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. In theory a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS…

  6. RHIC susceptibility to variations in systematic magnetic harmonic errors

    International Nuclear Information System (INIS)

    Dell, G.F.; Peggs, S.; Pilat, F.; Satogata, T.; Tepikian, S.; Trbojevic, D.; Wei, J.

    1994-01-01

    Results of a study to determine the sensitivity of tune to uncertainties of the systematic magnetic harmonic errors in the 8 cm dipoles of RHIC are reported. Tolerances specified to the manufacturer for tooling and fabrication can result in systematic harmonics different from the expected values. Limits on the range of systematic harmonics have been established from magnet calculations, and the impact on tune from such harmonics has been established

  7. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Science.gov (United States)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.

  8. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  9. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    Energy Technology Data Exchange (ETDEWEB)

    Brantjes, N.P.M. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Dzordzhadze, V. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Gebel, R. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Gonnella, F. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy)]; Gray, F.E. [Regis University, Denver, CO 80221 (United States); Hoek, D.J. van der [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Imig, A. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kruithof, W.L. [Kernfysisch Versneller Instituut, University of Groningen, NL-9747AA Groningen (Netherlands); Lazarus, D.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Lehrach, A.; Lorentz, B. [Institut fuer Kernphysik, Juelich Center for Hadron Physics, Forschungszentrum Juelich, D-52425 Juelich (Germany); Messi, R. [Physics Department of 'Tor Vergata' University, Rome (Italy); INFN-Sez. 'Roma Tor Vergata', Rome (Italy)]; Moricciani, D. [INFN-Sez. 'Roma Tor Vergata', Rome (Italy)]; Morse, W.M. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Noid, G.A. [Indiana University Cyclotron Facility, Bloomington, IN 47408 (United States); and others

    2012-02-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Juelich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.

  10. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

    This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement. By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations and experiments on a mobile robot.
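
    To give a minimal flavor of such a calibration, the sketch below estimates the encoder gain from a straight run and the wheel base from an in-place spin, using an external absolute reference. It assumes simple differential-drive kinematics and invented numbers; it is not the paper's filter-based procedure.

```python
import numpy as np

# Straight run: the encoder-to-distance gain follows from comparing the
# mean encoder count with an absolute distance measurement.
ticks_per_wheel = np.array([10000.0, 10020.0])   # left/right encoder counts
measured_distance = 3.141                        # metres, absolute reference
gain = measured_distance / ticks_per_wheel.mean()  # metres per tick

# In-place spin: heading change = (dR - dL) / wheel_base for a
# differential drive, so the wheel base follows from an absolute
# heading measurement.
turn_ticks = np.array([-5000.0, 5000.0])         # left backward, right forward
measured_heading_change = np.deg2rad(178.0)      # absolute heading change
wheel_base = gain * (turn_ticks[1] - turn_ticks[0]) / measured_heading_change

print(f"gain = {gain * 1e3:.4f} mm/tick, wheel base = {wheel_base:.3f} m")
```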

  11. Saccades to remembered target locations: an analysis of systematic and variable errors.

    Science.gov (United States)

    White, J M; Sparks, D L; Stanford, T R

    1994-01-01

    We studied the effects of varying delay interval on the accuracy and velocity of saccades to the remembered locations of visual targets. Remembered saccades were less accurate than control saccades. Both systematic and variable errors contributed to the loss of accuracy. Systematic errors were similar in size for delay intervals ranging from 400 msec to 5.6 sec, but variable errors increased monotonically as delay intervals were lengthened. Compared to control saccades, remembered saccades were slower and the peak velocities were more variable. However, neither peak velocity nor variability in peak velocity was related to the duration of the delay interval. Our findings indicate that a memory-related process is not the major source of the systematic errors observed on memory trials.

  12. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    Arima, Tatsumi

    1993-01-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, based on a comparison of the Tobimatsu-Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.)

  13. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

    The experimental systematic error in the barrel region was estimated to be 0.44%. This value is derived considering the systematic uncertainties from the dominant sources, but does not include uncertainties which are still being studied. In the end cap region, the study of shower behavior and the clustering effect is under way in order to determine the angular resolution at the low-angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1%. The technical precision of the theoretical uncertainty is better than 0.1%, based on a comparison of the Tobimatsu-Shimizu program and BABAMC as modified by ALEPH. To estimate the physical uncertainty we will use ALIBABA [9], which includes the O(α²) QED correction in leading-log approximation. (J.P.N.).

  14. Medication Errors in the Southeast Asian Countries: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Shahrzad Salmasi

    Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, Pubmed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be addressed if the issue of ME is to be fully understood and addressed.

  15. ILRS Activities in Monitoring Systematic Errors in SLR Data

    Science.gov (United States)

    Pavlis, E. C.; Luceri, V.; Kuzmicz-Cieslak, M.; Bianco, G.

    2017-12-01

    The International Laser Ranging Service (ILRS) contributes to ITRF development unique information that only Satellite Laser Ranging (SLR) is sensitive to: the definition of the origin and, in equal parts with VLBI, the scale of the model. For the development of ITRF2014, the ILRS analysts adopted a revision of the internal standards and procedures in generating our contribution from the eight ILRS Analysis Centers. The improved results for the ILRS components were reflected in the resulting new time series of the ITRF origin and scale, showing insignificant trends and tighter scatter. This effort was further extended after the release of ITRF2014, with the execution of a Pilot Project (PP) in the 2016-2017 timeframe that demonstrated the robust estimation of persistent systematic errors at the millimeter level. The ILRS ASC is now turning this into an operational tool to monitor station performance and to generate a history of systematics at each station, to be used with each re-analysis for future ITRF model developments. This is part of a broader ILRS effort to improve the quality control of the data collection process as well as that of our products. To this end, the ILRS has established a "Quality Control Board" (QCB) that comprises members from the analysis and engineering groups, the Central Bureau, and even user groups with special interests. The QCB meets by telecon monthly and oversees the various ongoing projects and develops ideas for new tools and future products. This presentation will focus on the main topic with an update on the results so far, the schedule for the near future and its operational implementation, along with a brief description of upcoming new ILRS products.

  16. Reducing systematic errors in measurements made by a SQUID magnetometer

    International Nuclear Information System (INIS)

    Kiss, L.F.; Kaptás, D.; Balogh, J.

    2014-01-01

    A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors – radial displacement in particular – and not by instrumental or environmental noise. - Highlights: • A simple method is described which reduces systematic errors of a SQUID. • The errors arise from a radial displacement of the sample in the gradiometer coil. • The procedure is to rotate the sample rod (with the sample) around its axis. • The best fit to the SQUID voltage has to be attained moving the sample through the coil. • The accuracy of measuring magnetic moment can be increased significantly.

  17. Systematic investigation of SLC final focus tolerances to errors

    International Nuclear Information System (INIS)

    Napoly, O.

    1996-10-01

    In this paper we review the tolerances of the SLC final focus system. To calculate these tolerances we used the error analysis routine of the program FFADA, which has been written to aid the design and the analysis of final focus systems for the future linear colliders. This routine, completed by S. Fartoukh, systematically reviews the errors generated by the geometric 6-d Euclidean displacements of each magnet as well as by the field errors (normal and skew) up to the sextupolar order. It calculates their effects on the orbit and the transfer matrix at second order in the errors, thus including cross-talk between errors originating from two different magnets. It also translates these effects into tolerances derived from spot size growth and luminosity loss. We have run the routine for the following set of beam IP parameters: σ*x = 2.1 μm; σ*x′ = 300 μrad; σ*z = 1 mm; σ*y = 0.55 μm; σ*y′ = 200 μrad; σ*b = 2 × 10⁻³. The resulting errors and tolerances are displayed in a series of histograms which are reproduced in this paper. (author)

  18. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

    One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of the radionuclides that suddenly and without control reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and the measurement itself that contribute to a high degree to the total mass activity evaluation error. Statistical errors in gamma spectrometry as well as in total mass alpha and beta activity evaluation are also discussed. Besides, some of the possible sources of errors in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of these errors to the total mass activity evaluation error is estimated, and procedures that could possibly reduce it are discussed. (author)

  19. ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction

    International Nuclear Information System (INIS)

    Wu Yan; Shannon, Mark A.

    2006-01-01

    The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed
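
    The proposed correction is straightforward to apply: fit the measured CPD against 1/V_ac and read off the intercept as the amplitude-independent CPD. The sketch below uses synthetic, exactly linear data purely for illustration; the amplitudes and voltages are invented.

```python
import numpy as np

# Measured CPD depends linearly on 1/V_ac (per the abstract's methodology),
# so the true CPD is the intercept of a straight-line fit.
v_ac = np.array([0.5, 1.0, 2.0, 4.0, 8.0])               # ac amplitudes (V)
cpd_measured = np.array([0.82, 0.66, 0.58, 0.54, 0.52])  # volts, hypothetical

slope, intercept = np.polyfit(1.0 / v_ac, cpd_measured, 1)
print(f"extrapolated true CPD ~ {intercept:.3f} V "
      f"(systematic part scales as {slope:.3f} / V_ac)")
```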

  20. Physical predictions from lattice QCD. Reducing systematic errors

    International Nuclear Information System (INIS)

    Pittori, C.

    1994-01-01

    Some recent developments in the theoretical understanding of lattice quantum chromodynamics and of its possible sources of systematic errors are reported, and a review of some of the latest Monte Carlo results for light-quark phenomenology is presented. A very general introduction to quantum field theory on a discrete spacetime lattice is given, and the Monte Carlo methods which allow one to compute many interesting physical quantities in the non-perturbative domain of strong interactions are illustrated. (author). 17 refs., 3 figs., 3 tabs

  1. Relaxed error control in shape optimization that utilizes remeshing

    CSIR Research Space (South Africa)

    Wilke, DN

    2013-02-01

    Shape optimization strategies based on error indicators usually require strict error control for every computed design during the optimization run. The strict error control serves two purposes. Firstly, it allows for the accurate computation…

  2. Recognition Errors Control in Biometric Identification Cryptosystems

    Directory of Open Access Journals (Sweden)

    Vladimir Ivanovich Vasilyev

    2015-06-01

    A biometric cryptosystem designed on the basis of a fuzzy extractor, in which the main disadvantages of biometric and cryptographic systems are absent, is considered. The main idea of this work is the control of identity recognition errors by means of a fuzzy extractor which operates with a Reed–Solomon correcting code. The fingerprint feature vector is considered as the biometric user identifier.
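
    A toy version of the code-offset construction that underlies such a fuzzy extractor. For self-containedness, a 5x repetition code stands in for the paper's Reed–Solomon code, and the fingerprint feature vector is simulated; the point is only how helper data lets decoding absorb biometric noise.

```python
import numpy as np

rng = np.random.default_rng(2)
REP = 5  # repetition factor; stands in for the Reed-Solomon code

def encode(bits):
    return np.repeat(bits, REP)

def decode(bits):
    # Majority vote within each block of REP bits.
    return (bits.reshape(-1, REP).sum(axis=1) > REP // 2).astype(np.uint8)

# Enrollment: bind a random secret key to the biometric template w.
w = rng.integers(0, 2, 80, dtype=np.uint8)     # stand-in feature vector
key = rng.integers(0, 2, 16, dtype=np.uint8)   # secret to reproduce later
helper = encode(key) ^ w                       # public helper data

# Verification: a fresh, noisy reading of the same finger flips a few bits.
w_noisy = w.copy()
w_noisy[[3, 17, 42, 55, 70]] ^= 1              # simulated recognition errors

recovered = decode(helper ^ w_noisy)           # decoding absorbs the noise
print("key recovered exactly:", np.array_equal(key, recovered))
```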

  3. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    The International Target Values (ITV) show random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be periodically evaluated and checked against the ITV for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed, with focus on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)
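
    One common way to split accountancy-style measurement data into random and systematic variance components is a one-way variance decomposition over calibration periods, with the systematic estimate clipped at zero. The sketch below is a generic illustration of that idea with simulated data, not the method developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated structure: within a calibration period the systematic error
# is fixed; it is redrawn between periods (an ITV-style error model).
n_periods, n_meas = 8, 12
sys_sd_true, rand_sd_true = 0.5, 1.0
data = (rng.normal(0, sys_sd_true, (n_periods, 1))    # one offset per period
        + rng.normal(0, rand_sd_true, (n_periods, n_meas)))

# Within-period scatter estimates the random variance; the excess
# between-period scatter estimates the systematic variance.
var_within = data.var(axis=1, ddof=1).mean()
var_between = data.mean(axis=1).var(ddof=1)
var_random = var_within
var_systematic = max(var_between - var_within / n_meas, 0.0)  # clip at zero

print(f"random sd     ~ {np.sqrt(var_random):.2f} (true {rand_sd_true})")
print(f"systematic sd ~ {np.sqrt(var_systematic):.2f} (true {sys_sd_true})")
```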

  4. Systematic literature review of hospital medication administration errors in children

    Directory of Open Access Journals (Sweden)

    Ameer A

    2015-11-01

    Objective: Medication administration is the last step in the medication process. It can act as a safety net to prevent unintended harm to patients if detected. However, medication administration errors (MAEs) during this process have been documented and are thought to be preventable. In pediatric medicine, doses are usually administered based on the child's weight or body surface area. This in turn increases the risk of drug miscalculations and therefore MAEs. The aim of this review is to report MAEs occurring in pediatric inpatients. Methods: Twelve bibliographic databases were searched for studies published between January 2000 and February 2015 using "medication administration errors", "hospital", and "children" related terminologies. Handsearching of relevant publications was also carried out. A second reviewer screened articles for eligibility and quality in accordance with the inclusion/exclusion criteria. Key findings: A total of 44 studies were systematically reviewed. MAEs were generally defined as a deviation of the dose given from that prescribed; this included omitted doses and administration at the wrong time. Hospital MAEs in children accounted for a mean of 50% of all reported medication error reports (n=12,588). They were also identified in a mean of 29% of doses observed (n=8,894). The most prevalent types of MAE related to preparation, infusion rate, dose, and time. This review has identified five types of interventions to reduce hospital MAEs in children: barcode medicine administration, electronic prescribing, education, use of smart pumps, and standard concentration. Conclusion: This review has identified a wide variation in the prevalence of hospital MAEs in children. This is attributed to…

  5. Prevalence and reporting of recruitment, randomisation and treatment errors in clinical trials: A systematic review.

    Science.gov (United States)

    Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A

    2018-06-01

    Background/aims: In clinical trials, it is not unusual for errors to occur during the process of recruiting, randomising and providing treatment to participants. For example, an ineligible participant may inadvertently be randomised, a participant may be randomised in the incorrect stratum, a participant may be randomised multiple times when only a single randomisation is permitted, or the incorrect treatment may inadvertently be issued to a participant at randomisation. Such errors have the potential to introduce bias into treatment effect estimates and affect the validity of the trial, yet there is little motivation for researchers to report these errors and it is unclear how often they occur. The aim of this study is to assess the prevalence of recruitment, randomisation and treatment errors and review current approaches for reporting these errors in trials published in leading medical journals. Methods: We conducted a systematic review of individually randomised, phase III, randomised controlled trials published in New England Journal of Medicine, Lancet, Journal of the American Medical Association, Annals of Internal Medicine and British Medical Journal from January to March 2015. The number and type of recruitment, randomisation and treatment errors that were reported, and how they were handled, were recorded. The corresponding authors were contacted for a random sample of trials included in the review and asked to provide details on unreported errors that occurred during their trial. Results: We identified 241 potentially eligible articles, of which 82 met the inclusion criteria and were included in the review. These trials involved a median of 24 centres and 650 participants, and 87% involved two treatment arms. Recruitment, randomisation or treatment errors were reported in 32 of 82 trials (39%), with a median of eight errors. The most commonly reported error was ineligible participants inadvertently being randomised. No mention of recruitment, randomisation…

  6. IceCube systematic errors investigation: Simulation of the ice

    Energy Technology Data Exchange (ETDEWEB)

    Resconi, Elisa; Wolf, Martin [Max-Planck-Institute for Nuclear Physics, Heidelberg (Germany); Schukraft, Anne [RWTH, Aachen University (Germany)

    2010-07-01

    IceCube is a neutrino observatory for astroparticle and astronomy research at the South Pole. It uses one cubic kilometer of Antarctica's deepest ice (1500 m-2500 m in depth) to detect Cherenkov light, generated by charged particles traveling through the ice, with an array of phototubes encapsulated in glass pressure spheres. The arrival times as well as the deposited charges of the detected photons are the basic measurements that are used for track and energy reconstruction of those charged particles. The optical properties of the deep Antarctic ice vary from layer to layer. Measurement of the ice properties and their correct modeling in Monte Carlo simulation is then of primary importance for the correct understanding of the IceCube telescope behavior. After a short summary of the different methods to investigate the ice properties and to calibrate the detector, we show how the simulation obtained by using this information compares to the measured data and how systematic errors due to uncertain ice properties are determined in IceCube.

  7. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    Science.gov (United States)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.
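
    The core question above, how mode content biases ringdown fits, can be illustrated numerically. The following sketch (an illustration only, with placeholder frequencies and damping times rather than the paper's Kerr values) fits a single damped sinusoid to a synthetic two-mode signal and shows the resulting systematic shift in the recovered fundamental frequency and damping time:

      import numpy as np
      from scipy.optimize import curve_fit

      # Synthetic ringdown: fundamental mode plus one overtone.
      # All numbers are illustrative placeholders, not Kerr values.
      t = np.linspace(0.0, 60.0, 2000)
      h = (1.0 * np.exp(-t / 11.0) * np.cos(0.53 * t)
           + 0.8 * np.exp(-t / 3.7) * np.cos(0.51 * t + 0.4))

      def one_mode(t, A, tau, w, phi):
          """A single exponentially damped sinusoid."""
          return A * np.exp(-t / tau) * np.cos(w * t + phi)

      # Fitting one mode to a two-mode signal biases the recovered
      # frequency and damping time: a systematic, not statistical, error.
      popt, _ = curve_fit(one_mode, t, h, p0=[1.5, 8.0, 0.52, 0.0],
                          maxfev=10000)
      print(f"recovered w = {popt[2]:.4f} (true fundamental 0.5300)")
      print(f"recovered tau = {popt[1]:.2f} (true 11.00)")

    Adding a second fitted mode removes most of this bias, mirroring the record's conclusion that at least the first overtone is needed for percent-level accuracy.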

  8. Systematic Error of Acoustic Particle Image Velocimetry and Its Correction

    Directory of Open Access Journals (Sweden)

    Mickiewicz Witold

    2014-08-01

    Full Text Available Particle Image Velocimetry is increasingly the method of choice not only for visualization of turbulent mass flows in fluid mechanics, but also in linear and non-linear acoustics for non-intrusive visualization of acoustic particle velocity. Particle Image Velocimetry with a low sampling rate (about 15 Hz) can be applied to visualize the acoustic field using acquisition synchronized to the excitation signal. Such a phase-locked PIV technique is described and used in the experiments presented in the paper. The main goal of the research was to propose a model of the PIV systematic error due to the non-zero time interval between the acquisition of two images of the examined sound field seeded with tracer particles, which affects the measurement of complex acoustic signals. The usefulness of the presented model is confirmed experimentally. The correction procedure, based on the proposed model, applied to measurement data increases the accuracy of acoustic particle velocity field visualization and creates new possibilities for observing sound fields excited with multi-tonal or band-limited noise signals.

  9. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    Full Text Available It is shown that only the hyperbolic law of the Periodic Table of Elements allows exact calculation of the atomic masses. The reference data of Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of the elements in the Table).

  10. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than the ISO 5725 standard suggests. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to the manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects what is actually observed. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In this context, the work motivates a more pragmatic and experimental approach to uncertainty than the one set out in Supplement 1 of the GUM. (paper)

  11. Error-controlled adaptive finite elements in solid mechanics

    National Research Council Canada - National Science Library

    Stein, Erwin; Ramm, E

    2003-01-01

    Contents include: Error-controlled Adaptive Finite-element-methods; Missing Features and Properties of Today's General Purpose FE Programs for Structural...

  12. Seeing your error alters my pointing: observing systematic pointing errors induces sensori-motor after-effects.

    Directory of Open Access Journals (Sweden)

    Roberta Ronchi

    Full Text Available During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result into sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects. Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot visual targets location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift were recorded before and after first-person's perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points on the right-side of a target (i.e., without error correction produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion "to feel" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.

  13. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. Patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motion. Systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while systematic errors had less impact than random errors. In conclusion, the intrafraction motion error from patient setup uncertainty is about half the interfraction motion error, which has less impact due to the stability of organ movement under DIBH. The systematic error is likewise about half the random error, because a modern linac can reduce systematic uncertainty effectively, while random errors remain uncontrollable. (paper)
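
    The record reports the systematic and random components separately but does not give the formulas. A common convention in the set-up error literature (the van Herk formulation, assumed here) takes the group systematic error as the spread of per-patient mean errors and the random error as the root mean square of per-patient spreads. A minimal sketch with hypothetical data:

      import numpy as np

      # Hypothetical set-up errors (mm) along one axis:
      # rows = patients, columns = imaged fractions.
      errors_mm = np.array([
          [ 0.4, -0.2,  0.6,  0.1],
          [-0.8, -0.5, -1.1, -0.9],
          [ 0.2,  0.9, -0.3,  0.5],
      ])

      patient_means = errors_mm.mean(axis=1)         # per-patient mean error
      patient_sds = errors_mm.std(axis=1, ddof=1)    # per-patient spread

      group_mean = patient_means.mean()              # overall mean M
      systematic = patient_means.std(ddof=1)         # Sigma: SD of the means
      random_err = np.sqrt((patient_sds**2).mean())  # sigma: RMS of the SDs

      print(f"M = {group_mean:.2f} mm, Sigma = {systematic:.2f} mm, "
            f"sigma = {random_err:.2f} mm")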

  14. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm; it is negligible compared to the other sources of time step error when a second-order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  15. The quality of systematic reviews about interventions for refractive error can be improved: a review of systematic reviews.

    Science.gov (United States)

    Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing

    2017-09-05

    Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), and (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 as reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for

  16. Economic impact of medication error: a systematic review.

    Science.gov (United States)

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, the influence of setup errors is found to depend strongly on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition.
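
    The convolution step described in the Methods can be mimicked in a few lines. The sketch below (a one-dimensional toy with made-up numbers, not the FastPlan calculation) blurs a static dose profile with a Gaussian to represent random setup error, displaces it to represent a systematic error, and reports the minimum target dose:

      import numpy as np

      # One-dimensional dose profile with Gaussian-like penumbrae
      # (all numbers illustrative).
      x = np.linspace(-30.0, 30.0, 601)                # position (mm)
      dose = 0.5 * (np.tanh((x + 10.0) / 3.0) - np.tanh((x - 10.0) / 3.0))

      def expected_dose(dose, x, sigma, shift):
          """Blur with a Gaussian of SD sigma (random set-up error averaged
          over many fractions) and displace by shift (systematic error)."""
          dx = x[1] - x[0]
          u = np.arange(-4.0 * sigma, 4.0 * sigma + dx, dx)
          kernel = np.exp(-0.5 * (u / sigma) ** 2)
          blurred = np.convolve(dose, kernel / kernel.sum(), mode="same")
          return np.interp(x - shift, x, blurred)      # shifted profile

      target = np.abs(x) <= 8.0                        # target region (mm)
      for sigma, shift in [(1.0, 0.0), (2.0, 0.0), (2.0, 2.0)]:
          d = expected_dose(dose, x, sigma, shift)
          print(f"sigma={sigma:.0f} mm, shift={shift:.0f} mm -> "
                f"minimum target dose {d[target].min():.3f}")

    The printout shows both trends the record reports: minimum target dose falls as the random blur grows, and a systematic displacement degrades it further.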

  18. Benefits and risks of using smart pumps to reduce medication error rates: a systematic review.

    Science.gov (United States)

    Ohashi, Kumiko; Dalleur, Olivia; Dykes, Patricia C; Bates, David W

    2014-12-01

    Smart infusion pumps have been introduced to prevent medication errors and have been widely adopted nationally in the USA, though they are not always used in Europe or other regions. Despite widespread usage of smart pumps, intravenous medication errors have not been fully eliminated. Through a systematic review of recent studies and reports regarding smart pump implementation and use, we aimed to identify the impact of smart pumps on error reduction and on the complex process of medication administration, and strategies to maximize the benefits of smart pumps. The medical literature related to the effects of smart pumps for improving patient safety was searched in PUBMED, EMBASE, and the Cochrane Central Register of Controlled Trials (CENTRAL) (2000-2014) and relevant papers were selected by two researchers. After the literature search, 231 papers were identified and the full texts of 138 articles were assessed for eligibility. Of these, 22 were included after removal of papers that did not meet the inclusion criteria. We assessed both the benefits and negative effects of smart pumps from these studies. One of the benefits of using smart pumps was intercepting errors such as the wrong rate, wrong dose, and pump setting errors. Other benefits include reduction of adverse drug event rates, practice improvements, and cost effectiveness. Meanwhile, the current issues or negative effects related to using smart pumps were lower compliance rates of using smart pumps, the overriding of soft alerts, non-intercepted errors, or the possibility of using the wrong drug library. The literature suggests that smart pumps reduce but do not eliminate programming errors. Although the hard limits of a drug library play a main role in intercepting medication errors, soft limits were still not as effective as hard limits because of high override rates. Compliance in using smart pumps is key towards effectively preventing errors. Opportunities for improvement include upgrading drug

  19. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  1. Internal quality control of RIA with Tonks error calculation method

    International Nuclear Information System (INIS)

    Chen Xiaodong

    1996-01-01

    Based on the methodological features of RIA, an internal quality control chart using the Tonks error calculation method, suitable for RIA, is designed. The quality control chart defines the allowable error in terms of the normal reference range. The method is simple to perform and its results are easy to interpret directly. Taking the determination of T3 and T4 as an example, the calculation of the allowable error, the drawing of the quality control chart and the analysis of the results are introduced.
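
    The record does not reproduce Tonks' formula. As commonly stated (an assumption here, not taken from the abstract), the allowable limit of error is a quarter of the width of the normal reference range, expressed as a percentage of the midpoint of that range and capped at ±10%:

      def tonks_allowable_error(ref_low, ref_high):
          """Tonks' allowable limit of error, in percent, for an analyte
          whose normal reference range is [ref_low, ref_high]: a quarter
          of the range width relative to the range midpoint, capped at
          +/-10%. (Common formulation, assumed rather than quoted.)"""
          midpoint = (ref_low + ref_high) / 2.0
          limit = 0.25 * (ref_high - ref_low) / midpoint * 100.0
          return min(limit, 10.0)

      # Illustrative serum T4 reference range of 60-160 nmol/L
      # (placeholder values, not taken from the record above).
      print(f"Allowable error: +/-{tonks_allowable_error(60.0, 160.0):.1f}%")

    A measured control value is then flagged when its deviation from the target exceeds this percentage.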

  2. Tolerable systematic errors in Really Large Hadron Collider dipoles

    International Nuclear Information System (INIS)

    Peggs, S.; Dell, F.

    1996-01-01

    Maximum allowable systematic harmonics for arc dipoles in a Really Large Hadron Collider are derived. The possibility of half-cell lengths much greater than 100 meters is justified. A convenient analytical model for evaluating horizontal tune shifts is developed and tested against a sample high-field collider.

  3. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NARCIS (Netherlands)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Ozben, C. S.; Prasuhn, D.; Sandri, P. Levi; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-01-01

    This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY.

  4. Medication errors in the Middle East countries: a systematic review of the literature.

    Science.gov (United States)

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20 %) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1 % to 90.5 % for prescribing and from 9.4 % to 80 % for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15 % to 34.8 % of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality.

  5. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into the nanoscale regime, it is impossible to guarantee a perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes detailed description of key issues in NOC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...

  6. Systematic Error Study for ALICE charged-jet v2 Measurement

    Energy Technology Data Exchange (ETDEWEB)

    Heinz, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Soltz, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-07-18

    We study the treatment of systematic errors in the determination of v2 for charged jets in √s_NN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data we evaluate the χ2 according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ2 and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
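
    The covariance re-casting mentioned above is simple to write down. In the sketch below (placeholder numbers, not ALICE data), a fully correlated systematic contributes an outer-product term to the covariance matrix, which is then used to evaluate χ2 against the null hypothesis; a shape systematic would add a further term with its own correlation structure:

      import numpy as np

      # Hypothetical v2 values in a few jet-pT bins (placeholders).
      v2 = np.array([0.05, 0.04, 0.03])
      stat = np.array([0.010, 0.012, 0.015])     # uncorrelated statistical
      sys_corr = np.array([0.008, 0.007, 0.006]) # fully correlated systematic

      # Covariance: C_ij = stat_i^2 * delta_ij + sys_i * sys_j.
      cov = np.diag(stat**2) + np.outer(sys_corr, sys_corr)

      # Chi-squared of the data relative to a null (zero) result.
      resid = v2 - 0.0
      chi2 = resid @ np.linalg.solve(cov, resid)
      print(f"chi2 = {chi2:.2f} for {len(v2)} bins")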

  7. Error Control in Distributed Node Self-Localization

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2008-03-01

    Full Text Available Location information of nodes in an ad hoc sensor network is essential to many tasks such as routing, cooperative sensing, and service delivery. Distributed node self-localization is lightweight and requires little communication overhead, but often suffers from the adverse effects of error propagation. Unlike other localization papers which focus on designing elaborate localization algorithms, this paper takes a different perspective, focusing on the error propagation problem, addressing questions such as where localization error comes from and how it propagates from node to node. To prevent error from propagating and accumulating, we develop an error-control mechanism based on characterization of node uncertainties and discrimination between neighboring nodes. The error-control mechanism uses only local knowledge and is fully decentralized. Simulation results have shown that the active selection strategy significantly mitigates the effect of error propagation for both range and directional sensors. It greatly improves localization accuracy and robustness.

  8. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER

    International Nuclear Information System (INIS)

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-01-01

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test using a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without reference beam correction, (5) a non-tilted reference beam using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  9. Inherent Error in Asynchronous Digital Flight Controls.

    Science.gov (United States)

    1980-02-01

    operation will be eliminated. If T* is close to T, the inherent error (eA) is a small value. Then the deficiency of the basic model may indicate the channel failure. To reduce this deficiency, the new model computes a tolerance value equal to the maximum steady-state sample covariance of the

  10. Impact of systematic errors on DVH parameters of different OAR and target volumes in Intracavitary Brachytherapy (ICBT)

    International Nuclear Information System (INIS)

    Mourya, Ankur; Singh, Gaganpreet; Kumar, Vivek; Oinam, Arun S.

    2016-01-01

    The aim of this study is to analyse the impact of systematic errors on DVH parameters of different OARs and target volumes in intracavitary brachytherapy (ICBT). To quantify the changes in dose-volume histogram parameters due to systematic errors in applicator reconstruction during brachytherapy planning, known errors in catheter reconstruction have to be introduced into the applicator coordinate system.

  11. A neural fuzzy controller learning by fuzzy error propagation

    Science.gov (United States)

    Nauck, Detlef; Kruse, Rudolf

    1992-01-01

    In this paper, we describe a procedure for integrating techniques for the adaptation of membership functions into a linguistic-variable-based fuzzy control environment using neural network learning principles. This is an extension of our earlier work. We solve the problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task using only knowledge of the global system state and the fuzzy error.

  12. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper develops a roundness measurement model with multiple systematic errors, which takes eccentricity, probe offset, probe tip radius, and tilt error into account for roundness measurement of cylindrical components. The effects of these systematic errors and of the component radius on the roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle. Its effectiveness is verified experimentally with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be improved by about 2.2 μm using the proposed model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
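
    The limacon model mentioned above is the usual first-order baseline: the measured radial profile is fitted as r(θ) ≈ R + a·cosθ + b·sinθ, where (a, b) absorbs the eccentricity and the residual is the form error. A minimal least-squares sketch with synthetic data (the paper's model adds further terms for probe offset, tip radius and tilt):

      import numpy as np

      def limacon_fit(theta, r):
          """Least-squares limacon fit r(theta) ~ R + a*cos + b*sin.
          (a, b) estimates the eccentricity; the residual is the
          roundness profile."""
          A = np.column_stack([np.ones_like(theta),
                               np.cos(theta), np.sin(theta)])
          (R, a, b), *_ = np.linalg.lstsq(A, r, rcond=None)
          residual = r - A @ np.array([R, a, b])
          return R, (a, b), residual

      # Synthetic profile (mm): 37 mm part, 50 um eccentricity,
      # 5-lobe form error of 2 um amplitude.
      theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
      r = 37.0 + 0.05 * np.cos(theta - 0.3) + 0.002 * np.cos(5 * theta)

      R, ecc, resid = limacon_fit(theta, r)
      print(f"radius {R:.4f} mm, roundness (peak-to-valley) "
            f"{np.ptp(resid) * 1e3:.2f} um")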

  13. Effects of averaging over motion and the resulting systematic errors in radiation therapy

    International Nuclear Information System (INIS)

    Evans, Philip M; Coolens, Catherine; Nioutsikou, Elena

    2006-01-01

    The potential for systematic errors in radiotherapy of a breathing patient is considered using the statistical model of Bortfeld et al (2002 Phys. Med. Biol. 47 2203-20). It is shown that although averaging over 30 fractions does result in a narrow Gaussian distribution of errors, as predicted by the central limit theorem, the fact that one or a few samples of the breathing patient's motion distribution are used for treatment planning (in contrast to the many treatment fractions that are likely to be delivered) may result in a much larger error with a systematic component. The error distribution may be particularly large if a scan at breath-hold is used for planning. (note)

  14. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. The distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.

  15. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks

    Directory of Open Access Journals (Sweden)

    Ángel J. Jarama

    2017-09-01

    Full Text Available In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle-determination device. The distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere, and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.
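
    As a toy illustration of the bias structure described in these two records (the terms and parameter values below are invented for illustration; the paper's model is considerably more detailed), a forward model can apply a constant-plus-proportional range bias, an azimuth offset, and an altitude offset to a radar plot:

      import numpy as np

      def apply_ssr_biases(rho_m, theta_deg, alt_ft,
                           d_rho=50.0, k_refr=1e-5, d_theta=0.1, d_alt=75.0):
          """Toy SSR bias model: constant range offset plus a
          range-proportional term (clock and atmospheric refraction),
          a constant azimuth offset (antenna calibration), and an
          altitude offset (non-standard pressure/temperature).
          All parameter values are illustrative placeholders."""
          rho_biased = rho_m + d_rho + k_refr * rho_m   # range (m)
          theta_biased = (theta_deg + d_theta) % 360.0  # azimuth (deg)
          alt_biased = alt_ft + d_alt                   # altitude (ft)
          return rho_biased, theta_biased, alt_biased

      print(apply_ssr_biases(80_000.0, 123.4, 35_000.0))

    A bias estimator then inverts exactly this kind of parametrized forward model from observed residuals, which is why completeness of the model improves the estimation accuracy.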

  16. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to infer experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the practical cases studied, the estimates for the likelihood weights are seen to converge impractically slowly with the sample size, compared to matrix inversion. The computational time is also estimated to be greater than for matrix inversion in cases with more experimental points. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to intuitively interpret than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and be used to help motivating the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
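
    The equivalence the record establishes, sampling a common systematic shift versus inverting the full covariance matrix, can be demonstrated in a few lines. In this sketch (toy numbers, one fully correlated systematic error), the sampled likelihood converges to the closed-form multivariate Gaussian as the number of systematic-error samples grows:

      import numpy as np

      rng = np.random.default_rng(0)

      y = np.array([1.02, 0.97, 1.05])     # experimental points (placeholders)
      model = np.array([1.00, 1.00, 1.00]) # model prediction to be weighted
      sig_r = np.array([0.03, 0.04, 0.05]) # random (uncorrelated) errors
      sig_s = 0.04                         # one fully correlated systematic

      resid = y - model

      # (a) Conventional: multivariate Gaussian, C = diag(sig_r^2) + sig_s^2*J.
      cov = np.diag(sig_r**2) + sig_s**2 * np.ones((3, 3))
      chi2 = resid @ np.linalg.solve(cov, resid)
      L_exact = np.exp(-0.5 * chi2) / np.sqrt(
          (2 * np.pi) ** 3 * np.linalg.det(cov))

      # (b) Sampling: draw the common systematic shift, average the
      # uncorrelated Gaussian likelihoods over the draws.
      shifts = rng.normal(0.0, sig_s, size=100_000)
      z = (resid[None, :] - shifts[:, None]) / sig_r
      L_samp = np.mean(np.prod(
          np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * sig_r), axis=1))

      print(f"exact {L_exact:.4g}  sampled {L_samp:.4g}")

    Rerunning with smaller sample sizes makes the record's point visible: the sampled estimate fluctuates strongly long after the matrix-inversion result is settled.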

  17. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Science.gov (United States)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  18. Systematic errors of EIT systems determined by easily-scalable resistive phantoms.

    Science.gov (United States)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-06-01

    We present a simple method to determine the systematic errors that will occur in measurements made by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16-electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurement, and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set so as to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on the resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal-to-noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise, since the Goe-MF II system had been optimized for a sufficient signal-to-noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT), systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT), where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  19. Systematic errors of EIT systems determined by easily-scalable resistive phantoms

    International Nuclear Information System (INIS)

    Hahn, G; Just, A; Dittmar, J; Hellige, G

    2008-01-01

    We present a simple method to determine the systematic errors that will occur in measurements made by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16-electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurement, and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set so as to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on the resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal-to-noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise, since the Goe-MF II system had been optimized for a sufficient signal-to-noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT), systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT), where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.

  20. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors

    International Nuclear Information System (INIS)

    Martinelli, T.; Panini, G.C.; Amoroso, A.

    1989-11-01

    Information about systematic errors is not given in EXFOR, the database of experimental nuclear measurements: their assessment is left to the skill of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, provides valuable assistance. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in experiments. The features of the project which led to the implementation of the system are presented. (author)

  1. Reduced phase error through optimized control of a superconducting qubit

    International Nuclear Information System (INIS)

    Lucero, Erik; Kelly, Julian; Bialczak, Radoslaw C.; Lenander, Mike; Mariantoni, Matteo; Neeley, Matthew; O'Connell, A. D.; Sank, Daniel; Wang, H.; Weides, Martin; Wenner, James; Cleland, A. N.; Martinis, John M.; Yamamoto, Tsuyoshi

    2010-01-01

    Minimizing phase and other errors in experimental quantum gates allows higher fidelity quantum processing. To quantify and correct for phase errors in particular, we have developed an experimental metrology - amplified phase error (APE) pulses - that amplifies and helps identify phase errors in general multilevel qubit architectures. In order to correct for both phase and amplitude errors specific to virtual transitions and leakage outside of the qubit manifold, we implement 'half derivative', an experimental simplification of derivative reduction by adiabatic gate (DRAG) control theory. Using this method, the phase errors are lowered by about a factor of five, to ∼1.6 deg. per gate, and can be tuned to zero. Leakage outside the qubit manifold, to the qubit |2> state, is also reduced to ∼10^-4 for 20% faster gates.

  2. Composite Gauss-Legendre Quadrature with Error Control

    Science.gov (United States)

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
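
    A sketch of the technique the record describes: composite Gauss-Legendre quadrature with a simple error-control loop that doubles the panel count until two successive estimates agree to a tolerance (this stopping rule is one common choice, not necessarily the authors'):

      import numpy as np

      def composite_gauss_legendre(f, a, b, panels, order=4):
          """Composite Gauss-Legendre rule: `order` nodes per panel,
          `panels` equal panels on [a, b]."""
          nodes, weights = np.polynomial.legendre.leggauss(order)
          edges = np.linspace(a, b, panels + 1)
          mid = 0.5 * (edges[:-1] + edges[1:])
          half = 0.5 * (edges[1:] - edges[:-1])
          x = mid[:, None] + half[:, None] * nodes[None, :]  # mapped nodes
          return np.sum(half[:, None] * weights[None, :] * f(x))

      def integrate(f, a, b, tol=1e-10, order=4, max_panels=2**20):
          """Double the panel count until successive estimates agree."""
          panels = 1
          q_old = composite_gauss_legendre(f, a, b, panels, order)
          while panels < max_panels:
              panels *= 2
              q_new = composite_gauss_legendre(f, a, b, panels, order)
              if abs(q_new - q_old) < tol:
                  return q_new
              q_old = q_new
          raise RuntimeError("tolerance not reached")

      print(integrate(np.sin, 0.0, np.pi))   # exact value: 2

    For smooth integrands the per-panel Gauss rule converges very rapidly, which is the advantage over Newton-Cotes rules of comparable cost noted in the record.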

  3. Data mining of air traffic control operational errors

    Science.gov (United States)

    2006-01-01

    In this paper we present the results of applying data mining techniques to identify patterns and anomalies in air traffic control operational errors (OEs). Reducing the OE rate is of high importance and remains a challenge in aviation safety...

  4. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  5. Electronic portal image assisted reduction of systematic set-up errors in head and neck irradiation

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Soernsen de Koste, John R. van; Creutzberg, Carien L.; Visser, Andries G.; Levendag, Peter C.; Heijmen, Ben J.M.

    2001-01-01

    Purpose: To quantify systematic and random patient set-up errors in head and neck irradiation and to investigate the impact of an off-line correction protocol on the systematic errors. Material and methods: Electronic portal images were obtained for 31 patients treated for primary supra-glottic larynx carcinoma who were immobilised using a polyvinyl chloride cast. The observed patient set-up errors were input to the shrinking action level (SAL) off-line decision protocol and appropriate set-up corrections were applied. To assess the impact of the protocol, the positioning accuracy without application of set-up corrections was reconstructed. Results: The set-up errors obtained without set-up corrections (1 standard deviation (SD)=1.5-2 mm for random and systematic errors) were comparable to those reported in other studies on similar fixation devices. On average, six fractions per patient were imaged and the set-up of half the patients was changed by the decision protocol. Most changes were detected during weekly check measurements, not during the first days of treatment. The application of the SAL protocol reduced the width of the distribution of systematic errors to 1 mm (1 SD), as expected from simulations. A retrospective analysis showed that this accuracy should be attainable with only two measurements per patient using a different off-line correction protocol which does not apply action levels. Conclusions: Off-line verification protocols can be particularly effective in head and neck patients because the random set-up errors are small. The excellent set-up reproducibility that can be achieved with such protocols enables accurate dose delivery in conformal treatments.
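
    A minimal sketch of a shrinking-action-level check of the kind used above (the threshold form α/√N is the published SAL idea; the parameter values and two-axis data here are illustrative, not those of the study):

      import numpy as np

      def sal_correction(setup_errors_mm, alpha=4.0, nmax=4):
          """Shrinking-action-level check: after each of the first `nmax`
          imaged fractions, compare the running mean set-up error with the
          level alpha/sqrt(N); once exceeded, return the couch correction
          that cancels the estimated systematic error. Parameter values
          are illustrative only."""
          n_avail = min(nmax, len(setup_errors_mm))
          for n in range(1, n_avail + 1):
              mean = np.mean(setup_errors_mm[:n], axis=0)
              if np.linalg.norm(mean) > alpha / np.sqrt(n):
                  return -mean
          return np.zeros(setup_errors_mm.shape[1])  # no correction needed

      daily = np.array([[2.5, -1.0],   # hypothetical daily errors (mm)
                        [3.0, -0.5],
                        [2.0, -1.5]])
      print(sal_correction(daily))

    Once the running mean exceeds the shrinking level, the estimated systematic component is corrected and monitoring continues, which is how the protocol narrows the systematic-error distribution while leaving the random component untouched.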

  6. Joint position sense error in people with neck pain: A systematic review.

    Science.gov (United States)

    de Vries, J; Ischebeck, B K; Voogt, L P; van der Geest, J N; Janssen, M; Frens, M A; Kleinrensink, G J

    2015-12-01

    Several studies in recent decades have examined the relationship between proprioceptive deficits and neck pain. However, there is no uniform conclusion on the relationship between the two. Clinically, proprioception is evaluated using the Joint Position Sense Error (JPSE), which reflects a person's ability to accurately return his head to a predefined target after a cervical movement. We focused on differences in JPSE between people with neck pain and healthy controls. A systematic review was performed according to the PRISMA guidelines. Our data sources were Embase, Medline OvidSP, Web of Science, Cochrane Central, CINAHL and Pubmed Publisher. To be included, studies had to compare JPSE of the neck (O) in people with neck pain (P) with JPSE of the neck in healthy controls (C). Fourteen studies were included. Four studies reported that participants with traumatic neck pain had a significantly higher JPSE than healthy controls. Of the eight studies involving people with non-traumatic neck pain, four reported significant differences between the groups. The JPSE did not vary between neck-pain groups. The current literature shows the JPSE to be a relevant measure when it is used correctly. All studies which calculated the JPSE over at least six trials showed a significantly increased JPSE in the neck pain group. This strongly suggests that the number of repetitions is a major element in correctly performing the JPSE test. Copyright © 2015 Elsevier Ltd. All rights reserved.
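
    Computing the JPSE itself is straightforward; the review's point is that enough trials must be averaged. A sketch using the mean absolute repositioning error (one common definition; some studies use constant or variable error instead), with hypothetical single-subject data:

      import numpy as np

      def jpse(repositioning_errors_deg):
          """Joint Position Sense Error: mean absolute error (degrees)
          with which the head is returned to the start position, averaged
          over trials. The review above suggests at least six trials."""
          e = np.asarray(repositioning_errors_deg, dtype=float)
          return np.mean(np.abs(e))

      trials = [3.1, -2.4, 4.0, -1.8, 2.9, 3.5]  # hypothetical data (deg)
      print(f"JPSE = {jpse(trials):.2f} deg over {len(trials)} trials")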

  7. The systematic error of temperature noise correlation measurement method and self-calibration

    International Nuclear Information System (INIS)

    Tian Hong; Tong Yunxian

    1993-04-01

    The turbulent transport behaviour of fluid noise and the effect of noise on the velocity measurement system have been studied. The systematic error of the velocity measurement system is analysed. A theoretical calibration method is proposed, which makes time-correlation velocity measurement an absolute measurement method. The theoretical results are in good agreement with experiments.

  8. End-point construction and systematic titration error in linear titration curves-complexation reactions

    NARCIS (Netherlands)

    Coenegracht, P.M.J.; Duisenberg, A.J.M.

    The systematic titration error which is introduced by the intersection of tangents to hyperbolic titration curves is discussed. The effects of the apparent (conditional) formation constant, of the concentration of the unknown component and of the ranges used for the end-point construction are

  9. On the effect of systematic errors in near real time accountancy

    International Nuclear Information System (INIS)

    Avenhaus, R.

    1987-01-01

    Systematic measurement errors have a decisive impact on nuclear materials accountancy. This has been demonstrated on various occasions for a fixed number of inventory periods, i.e. for situations where the overall probability of detection is taken as the measure of effectiveness. In the framework of Near Real Time Accountancy (NRTA), however, such analyses have not yet been performed. In this paper sequential test procedures are considered which are based on the so-called MUF residuals. It is shown that, if the decision maker does not know the systematic error variance, the average run lengths tend towards infinity if this variance is equal to or larger than that of the random error. Furthermore, if the decision maker knows this variance, the average run length under constant loss or diversion is not shorter than that without loss or diversion. These results cast some doubt on the present practice of data evaluation, where systematic errors are tacitly assumed to persist for an infinite time. In fact, information about the time dependence of the variances of these errors has to be gathered so that the efficiency of NRTA evaluation methods can be estimated realistically.
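
    The run-length behaviour described above can be explored with a toy Monte Carlo. The sketch below (a one-sided CUSUM on MUF-like residuals; all parameters are invented, and this is not the paper's test statistic) draws one persistent systematic error per run and shows how the average run length changes as the systematic variance grows:

      import numpy as np

      rng = np.random.default_rng(1)

      def run_length(sigma_rand=1.0, sigma_sys=0.5, loss=0.0,
                     h=5.0, k=0.5, nmax=10_000):
          """One run of a one-sided CUSUM on MUF-like residuals with a
          systematic error that persists over all inventory periods.
          Returns the period at which the alarm threshold h is crossed."""
          sys_err = rng.normal(0.0, sigma_sys)  # persists for the whole run
          s = 0.0
          for n in range(1, nmax + 1):
              muf = loss + sys_err + rng.normal(0.0, sigma_rand)
              s = max(0.0, s + muf - k)
              if s > h:
                  return n
          return nmax

      for sigma_sys in (0.0, 0.5, 1.0, 1.5):
          arl = np.mean([run_length(sigma_sys=sigma_sys)
                         for _ in range(200)])
          print(f"sigma_sys={sigma_sys:.1f} -> average run length ~ {arl:.0f}")

    Because the systematic component never averages out, it shifts the whole run-length distribution, which is the mechanism behind the paper's warning about treating systematic errors as persisting indefinitely.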

  10. Nature versus nurture: A systematic approach to elucidate gene-environment interactions in the development of myopic refractive errors.

    Science.gov (United States)

    Miraldi Utz, Virginia

    2017-01-01

    Myopia is the most common eye disorder and major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach using a combination of gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Differential expression of refractive error was associated with time spent reading for those with low frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.

  11. Random and systematic beam modulator errors in dynamic intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Parsai, Homayon; Cho, Paul S; Phillips, Mark H; Giansiracusa, Robert S; Axen, David

    2003-01-01

    This paper reports on the dosimetric effects of random and systematic modulator errors in the delivery of dynamic intensity modulated beams. A sliding-window type delivery that utilizes a combination of multileaf collimators (MLCs) and backup diaphragms was examined. Gaussian functions with standard deviations ranging from 0.5 to 1.5 mm were used to simulate random positioning errors. A clinical example involving a clival meningioma was chosen with optic chiasm and brain stem as limiting critical structures in the vicinity of the tumour. Dose calculations for different modulator fluctuations were performed, and a quantitative analysis was carried out based on cumulative and differential dose volume histograms for the gross target volume and surrounding critical structures. The study indicated that random modulator errors have a strong tendency to reduce minimum target dose and homogeneity. Furthermore, it was shown that random perturbation of both MLCs and backup diaphragms of the order of σ = 1 mm can lead to 5% errors in prescribed dose. In comparison, when MLCs or backup diaphragms alone were perturbed, the system was more robust and modulator errors of at least σ = 1.5 mm were required to cause dose discrepancies greater than 5%. For systematic perturbation, even errors of the order of ±0.5 mm were shown to result in significant dosimetric deviations.

  12. Controlling qubit drift by recycling error correction syndromes

    Science.gov (United States)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE

  13. Variation across mitochondrial gene trees provides evidence for systematic error: How much gene tree variation is biological?

    Science.gov (United States)

    Richards, Emilie J; Brown, Jeremy M; Barley, Anthony J; Chong, Rebecca A; Thomson, Robert C

    2018-02-19

    The use of large genomic datasets in phylogenetics has highlighted extensive topological variation across genes. Much of this discordance is assumed to result from biological processes. However, variation among gene trees can also be a consequence of systematic error driven by poor model fit, and the relative importance of biological versus methodological factors in explaining gene tree variation is a major unresolved question. Using mitochondrial genomes to control for biological causes of gene tree variation, we estimate the extent of gene tree discordance driven by systematic error and employ posterior prediction to highlight the role of model fit in producing this discordance. We find that the amount of discordance among mitochondrial gene trees is similar to the amount of discordance found in other studies that assume only biological causes of variation. This similarity suggests that the role of systematic error in generating gene tree variation is underappreciated and critical evaluation of fit between assumed models and the data used for inference is important for the resolution of unresolved phylogenetic questions.

  14. EMG Versus Torque Control of Human-Machine Systems: Equalizing Control Signal Variability Does not Equalize Error or Uncertainty.

    Science.gov (United States)

    Johnson, Reva E; Kording, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W

    2017-06-01

    In this paper we asked the question: if we artificially raise the variability of torque control signals to match that of EMG, do subjects make similar errors and have similar uncertainty about their movements? We answered this question using two experiments in which subjects used three different control signals: torque, torque+noise, and EMG. First, we measured error on a simple target-hitting task in which subjects received visual feedback only at the end of their movements. We found that even when the signal-to-noise ratio was equal across EMG and torque+noise control signals, EMG resulted in larger errors. Second, we quantified uncertainty by measuring the just-noticeable difference of a visual perturbation. We found that for equal errors, EMG resulted in higher movement uncertainty than both torque and torque+noise. The differences suggest that performance and confidence are influenced by more than just the noisiness of the control signal, and suggest that other factors, such as the user's ability to incorporate feedback and develop accurate internal models, also have significant impacts on the performance and confidence of a person's actions. We theorize that users have difficulty distinguishing between random and systematic errors for EMG control, and future work should examine in more detail the types of errors made with EMG control.

  15. When soft controls get slippery: User interfaces and human error

    International Nuclear Information System (INIS)

    Stubler, W.F.; O'Hara, J.M.

    1998-01-01

    Many types of products and systems that have traditionally featured physical control devices are now being designed with soft controls--input formats appearing on computer-based display devices and operated by a variety of input devices. A review of complex human-machine systems found that soft controls are particularly prone to some types of errors and may affect overall system performance and safety. This paper discusses the application of design approaches for reducing the likelihood of these errors and for enhancing usability, user satisfaction, and system performance and safety.

  16. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevin.coakley@nist.go [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S. [National Institute of Standards and Technology, Gaithersburg, MD (United States); Yue, A.T. [University of Tennessee, Knoxville, TN (United States); Laptev, A.B. [Tulane University, New Orleans, LA (United States)

    2009-12-11

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.
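
    The measured quantity follows from Bragg's law, λ = 2d sin θ_B, so small misalignments of the rocking axis and of the scattering-plane normal propagate into the mean wavelength. A minimal Monte Carlo sketch of that propagation, assuming illustrative tilt uncertainties and a simple cosine projection model (the geometry treated in the paper is more detailed):

        import numpy as np

        d_si = 3.1356e-10           # Si(111) plane spacing (m), approximate
        theta_b = np.radians(20.0)  # nominal Bragg angle from the rocking-curve fit (assumed)
        n_mc = 100_000

        rng = np.random.default_rng(0)
        # assumed 1-sigma alignment uncertainties (rad)
        tilt_axis = rng.normal(0.0, np.radians(0.1), n_mc)  # rocking-axis tilt
        tilt_norm = rng.normal(0.0, np.radians(0.1), n_mc)  # plane-normal tilt

        # small-tilt model: each tilt reduces the effective Bragg angle by its cosine
        theta_eff = theta_b * np.cos(tilt_axis) * np.cos(tilt_norm)
        lam = 2.0 * d_si * np.sin(theta_eff)
        lam0 = 2.0 * d_si * np.sin(theta_b)
        print(f"relative systematic shift: {(lam.mean() - lam0) / lam0:.2e}")
        print(f"relative spread:           {lam.std() / lam0:.2e}")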

  17. Characterization of electromagnetic fields in the aSPECT spectrometer and reduction of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Ayala Guardia, Fidel

    2011-10-15

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of the free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed in the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully performed. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits improvement of the present literature value of a. Furthermore, a custom NMR-magnetometer was developed and improved during this thesis, which will lead to a reduction of magnetic field-related uncertainties down to a negligible level, allowing a to be determined with a total relative error of at most 0.3%.

  18. Characterization of electromagnetic fields in the aSPECT spectrometer and reduction of systematic errors

    International Nuclear Information System (INIS)

    Ayala Guardia, Fidel

    2011-10-01

    The aSPECT spectrometer has been designed to measure, with high precision, the recoil proton spectrum of the free neutron decay. From this spectrum, the electron antineutrino angular correlation coefficient a can be extracted with high accuracy. The goal of the experiment is to determine the coefficient a with a total relative error smaller than 0.3%, well below the current literature value of 5%. First measurements with the aSPECT spectrometer were performed in the Forschungs-Neutronenquelle Heinz Maier-Leibnitz in Munich. However, time-dependent background instabilities prevented us from reporting a new value of a. The contents of this thesis are based on the latest measurements performed with the aSPECT spectrometer at the Institut Laue-Langevin (ILL) in Grenoble, France. In these measurements, background instabilities were considerably reduced. Furthermore, diverse modifications intended to minimize systematic errors and to achieve a more reliable setup were successfully performed. Unfortunately, saturation effects of the detector electronics turned out to be too high to determine a meaningful result. However, this and other systematics were identified and decreased, or even eliminated, for future aSPECT beamtimes. The central part of this work is focused on the analysis and improvement of systematic errors related to the aSPECT electromagnetic fields. This work yielded many improvements, particularly in the reduction of the systematic effects due to electric fields. The systematics related to the aSPECT magnetic field were also minimized and determined down to a level which permits improvement of the present literature value of a. Furthermore, a custom NMR-magnetometer was developed and improved during this thesis, which will lead to a reduction of magnetic field-related uncertainties down to a negligible level, allowing a to be determined with a total relative error of at most 0.3%.

  19. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    Energy Technology Data Exchange (ETDEWEB)

    Mandelbaum, R.; Rowe, B.; Armstrong, R.; Bard, D.; Bertin, E.; Bosch, J.; Boutigny, D.; Courbin, F.; Dawson, W. A.; Donnarumma, A.; Fenech Conti, I.; Gavazzi, R.; Gentile, M.; Gill, M. S. S.; Hogg, D. W.; Huff, E. M.; Jee, M. J.; Kacprzak, T.; Kilbinger, M.; Kuntzer, T.; Lang, D.; Luo, W.; March, M. C.; Marshall, P. J.; Meyers, J. E.; Miller, L.; Miyatake, H.; Nakajima, R.; Ngole Mboula, F. M.; Nurbaeva, G.; Okura, Y.; Paulin-Henriksson, S.; Rhodes, J.; Schneider, M. D.; Shan, H.; Sheldon, E. S.; Simet, M.; Starck, J. -L.; Sureau, F.; Tewes, M.; Zarb Adami, K.; Zhang, J.; Zuntz, J.

    2015-05-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  20. Servo control booster system for minimizing following error

    Science.gov (United States)

    Wise, W.L.

    1979-07-26

    A closed-loop feedback-controlled servo system is disclosed which reduces command-to-response error to the system's position feedback resolution least increment, ΔS_R, on a continuous real-time basis, for all operational times of consequence and for all operating speeds. The servo system employs a second position feedback control loop on a by-exception basis, when the command-to-response error is greater than or equal to ΔS_R, to produce precise position correction signals. When the command-to-response error is less than ΔS_R, control automatically reverts to conventional control means as the second position feedback control loop is disconnected, becoming transparent to conventional servo control means. By operating the second unique position feedback control loop used herein at the appropriate clocking rate, command-to-response error may be reduced to the position feedback resolution least increment. The present system may be utilized in combination with a tachometer loop for increased stability.
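
    The by-exception switching described above reduces to a threshold test on each control tick. A minimal sketch, with hypothetical callables standing in for the conventional servo law and the precise correction loop:

        def servo_update(command, response, delta_sr, conventional, corrector):
            """One control tick of a by-exception dual-loop servo (illustrative only).

            conventional: callable implementing the ordinary servo law (e.g. a PID step).
            corrector:    callable implementing the precise position-correction loop.
            delta_sr:     position feedback resolution least increment (engage threshold).
            """
            error = command - response
            if abs(error) >= delta_sr:
                return corrector(error)    # second loop engaged, by exception
            return conventional(error)     # second loop transparent below threshold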

  1. IPTV multicast with peer-assisted lossy error control

    Science.gov (United States)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to impulse noise.
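
    The erasure-repair principle behind such multicast FEC can be illustrated with a single XOR parity symbol (SLEP/SLEPr itself is a source-aware Wyner-Ziv scheme; this sketch only shows how one repair symbol recovers one lost packet):

        def xor_parity(packets):
            """One systematic FEC repair symbol: XOR of equal-length source packets."""
            repair = bytearray(len(packets[0]))
            for pkt in packets:
                for i, byte in enumerate(pkt):
                    repair[i] ^= byte
            return bytes(repair)

        def recover_single_loss(received, repair):
            """Recover exactly one lost packet of the block from the repair symbol."""
            return xor_parity(list(received) + [repair])

        block = [b"abcd", b"efgh", b"ijkl"]
        parity = xor_parity(block)
        assert recover_single_loss([block[0], block[2]], parity) == block[1]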

  2. Experimental Evaluation of a Mixed Controller That Amplifies Spatial Errors and Reduces Timing Errors

    Directory of Open Access Journals (Sweden)

    Laura Marchal-Crespo

    2017-06-01

    Full Text Available Research on motor learning suggests that training with haptic guidance enhances learning of the timing components of motor tasks, whereas error amplification is better for learning the spatial components. We present a novel mixed guidance controller that combines haptic guidance and error amplification to simultaneously promote learning of the timing and spatial components of complex motor tasks. The controller is realized using a force field around the desired position. This force field has a stable manifold tangential to the trajectory that guides subjects in velocity-related aspects. The force field has an unstable manifold perpendicular to the trajectory, which amplifies the perpendicular (spatial) error. We also designed a controller that applies randomly varying, unpredictable disturbing forces to enhance the subjects’ active participation by pushing them away from their “comfort zone.” We conducted an experiment with thirty-two healthy subjects to evaluate the impact of four different training strategies on motor skill learning and self-reported motivation: (i) no haptics, (ii) mixed guidance, (iii) perpendicular error amplification and tangential haptic guidance provided in sequential order, and (iv) randomly varying disturbing forces. Subjects trained two motor tasks using ARMin IV, a robotic exoskeleton for upper limb rehabilitation: follow circles with an ellipsoidal speed profile, and move along a 3D line following a complex speed profile. Mixed guidance showed no detectable learning advantages over the other groups. Results suggest that the effectiveness of the training strategies depends on the subjects’ initial skill level. Mixed guidance seemed to benefit subjects who performed the circle task with smaller errors during baseline (i.e., initially more skilled subjects), while training with no haptics was more beneficial for subjects who created larger errors (i.e., less skilled subjects). Therefore, perhaps the high functional

  3. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.

  4. Investigating Systematic Errors of the Interstellar Flow Longitude Derived from the Pickup Ion Cutoff

    Science.gov (United States)

    Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Complementary to the direct neutral particle measurements performed by e.g. IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool to investigate the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Due to this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUI's injection speed and thus the former neutral particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can lead to a shift of the result of up to 5°. In turn, this means that every value for the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.

  5. An empirical study on the basic human error probabilities for NPP advanced main control room operation using soft control

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Harbi, Mohamed Ali Salem Al; Lee, Seung Jun; Kang, Hyun Gook; Seong, Poong Hyun

    2013-01-01

    In this paper, BHEPs in a soft control operation environment are investigated empirically to obtain values applicable to advanced MCRs. A soft control operation environment is constructed by using a compact nuclear simulator (CNS), which is a mockup for advanced MCRs. Before the experiments, all tasks that should be performed by subjects are analyzed using one of the task analysis methods, the Systematic Human Error Reduction and Prediction Approach (SHERPA). Human errors are then checked to analyze BHEPs, human error modes, and the causes of human error when using soft controls.

  6. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    J. Lang (Jens); J.G. Verwer (Jan)

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  7. On global error estimation and control for initial value problems

    NARCIS (Netherlands)

    Lang, J.; Verwer, J.G.

    2007-01-01

    This paper addresses global error estimation and control for initial value problems for ordinary differential equations. The focus lies on a comparison between a novel approach based on the adjoint method combined with a small sample statistical initialization and the classical approach

  8. A new controller for the JET error field correction coils

    International Nuclear Information System (INIS)

    Zanotto, L.; Sartori, F.; Bigi, M.; Piccolo, F.; De Benedetti, M.

    2005-01-01

    This paper describes the hardware and the software structure of a new controller for the JET error field correction coils (EFCC) system, a set of ex-vessel coils that recently replaced the internal saddle coils. The EFCC controller has been developed on a conventional VME hardware platform using a new software framework, recently designed for real-time applications at JET, and replaces the old disruption feedback controller, increasing the flexibility and the optimization of the system. The use of conventional hardware has required a particular effort in designing the software part in order to meet the specifications. The peculiarities of the new controller will be highlighted, such as its very useful trigger logic interface, which allows, in principle, exploring various error field experiment scenarios.

  9. Differing Air Traffic Controller Responses to Similar Trajectory Prediction Errors

    Science.gov (United States)

    Mercer, Joey; Hunt-Espinosa, Sarah; Bienert, Nancy; Laraway, Sean

    2016-01-01

    A Human-In-The-Loop simulation was conducted in January of 2013 in the Airspace Operations Laboratory at NASA's Ames Research Center. The simulation airspace included two en route sectors feeding the northwest corner of Atlanta's Terminal Radar Approach Control. The focus of this paper is on how uncertainties in the study's trajectory predictions impacted the controllers' ability to perform their duties. Of particular interest is how the controllers interacted with the delay information displayed in the meter list and data block while managing the arrival flows. Due to wind forecasts with 30-knot over-predictions and 30-knot under-predictions, delay value computations included errors of similar magnitude, albeit in opposite directions. However, when performing their duties in the presence of these errors, did the controllers issue clearances of similar magnitude, albeit in opposite directions?

  10. Causes of medication administration errors in hospitals: a systematic review of quantitative and qualitative evidence.

    Science.gov (United States)

    Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M

    2013-11-01

    Underlying systems factors have been seen to be crucial contributors to the occurrence of medication errors. By understanding the causes of these errors, the most appropriate interventions can be designed and implemented to minimise their occurrence. This study aimed to systematically review and appraise empirical evidence relating to the causes of medication administration errors (MAEs) in hospital settings. Nine electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, ASSIA, PsycINFO, British Nursing Index, CINAHL, Health Management Information Consortium and Social Science Citations Index) were searched between 1985 and May 2013. Inclusion and exclusion criteria were applied to identify eligible publications through title analysis followed by abstract and then full text examination. English language publications reporting empirical data on causes of MAEs were included. Reference lists of included articles and relevant review papers were hand searched for additional studies. Studies were excluded if they did not report data on specific MAEs, used accounts from individuals not directly involved in the MAE concerned or were presented as conference abstracts with insufficient detail. A total of 54 unique studies were included. Causes of MAEs were categorised according to Reason's model of accident causation. Studies were assessed to determine relevance to the research question and how likely the results were to reflect the potential underlying causes of MAEs based on the method(s) used. Slips and lapses were the most commonly reported unsafe acts, followed by knowledge-based mistakes and deliberate violations. Error-provoking conditions influencing administration errors included inadequate written communication (prescriptions, documentation, transcription), problems with medicines supply and storage (pharmacy dispensing errors and ward stock management), high perceived workload, problems with ward-based equipment (access, functionality

  11. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system, which has been designed for improving the safety and the effectiveness of the vision-based commands. It simulates these commands in a virtual environment. It attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors or operator errors as well.

  12. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    International Nuclear Information System (INIS)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-01-01

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
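
    The gamma index combines a dose-difference criterion and a distance-to-agreement criterion into one pass/fail number per point. A minimal 1D sketch of the standard global gamma computation (illustrative grid and tolerances; clinical tools work on 2D/3D grids with interpolation):

        import numpy as np

        def gamma_index_1d(ref_dose, eval_dose, x_mm, dose_tol=0.03, dta_mm=3.0):
            """Global gamma analysis of two dose profiles on a common grid.

            dose_tol: dose-difference criterion as a fraction of the reference maximum.
            dta_mm:   distance-to-agreement criterion in mm.
            Returns the passing fraction (gamma <= 1) and the per-point gamma values.
            """
            d_norm = dose_tol * ref_dose.max()
            gammas = np.empty_like(ref_dose)
            for i, (xr, dr) in enumerate(zip(x_mm, ref_dose)):
                dd = (eval_dose - dr) / d_norm   # dose term against every eval point
                dx = (x_mm - xr) / dta_mm        # distance term
                gammas[i] = np.sqrt(dd**2 + dx**2).min()
            return (gammas <= 1.0).mean(), gammas

        # toy example: a Gaussian profile shifted by 1 mm
        x = np.linspace(-30.0, 30.0, 241)
        ref = np.exp(-x**2 / 200.0)
        ev = np.exp(-(x - 1.0)**2 / 200.0)
        passing, _ = gamma_index_1d(ref, ev, x)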

  13. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors

    Energy Technology Data Exchange (ETDEWEB)

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada and Department of Physics and Astronomy, University of Calgary, 2500 University Drive North West, Calgary, Alberta T2N 1N4 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4 (Canada) and Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada)

    2010-07-15

    Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.

  14. 'When measurements mean action' decision models for portal image review to eliminate systematic set-up errors

    International Nuclear Information System (INIS)

    Wratten, C.R.; Denham, J.W.; O'Brien, P.; Hamilton, C.S.; Kron, T.; London Regional Cancer Centre, London, Ontario

    2004-01-01

    The aim of the present paper is to evaluate how the use of decision models in the review of portal images can eliminate systematic set-up errors during conformal therapy. Sixteen patients undergoing four-field irradiation of prostate cancer have had daily portal images obtained during the first two treatment weeks and weekly thereafter. The magnitude of random and systematic variations has been calculated by comparison of the portal image with the reference simulator images using the two-dimensional decision model embodied in the Hotelling's evaluation process (HEP). Random day-to-day set-up variation was small in this group of patients. Systematic errors were, however, common. In 15 of 16 patients, one or more errors of >2 mm were diagnosed at some stage during treatment. Sixteen of the 23 errors were between 2 and 4 mm. Although there were examples of oversensitivity of the HEP in three cases, and one instance of undersensitivity, the HEP proved highly sensitive to the small (2-4 mm) systematic errors that must be eliminated during high precision radiotherapy. The HEP has proven valuable in diagnosing very small (<4 mm) systematic errors; using one-dimensional decision models, HEP can eliminate the majority of systematic errors during the first 2 treatment weeks. Copyright (2004) Blackwell Science Pty Ltd

  15. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    Science.gov (United States)

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge

  16. Quaternion error-based optimal control applied to pinpoint landing

    Science.gov (United States)

    Ghiglino, Pablo

    Accurate control techniques for pinpoint planetary landing - i.e., the goal of achieving landing errors on the order of 100 m for unmanned missions - address a complex problem that has been tackled in different ways in the available literature. Among other challenges, this kind of control is also affected by the well known trade-off in UAV control that for complex underlying models the control is sub-optimal, while optimal control is applied to simplified models. The goal of this research has been the development of new control algorithms able to tackle these challenges, and the result is two novel optimal control algorithms, namely OQTAL and HEX2OQTAL. These controllers share three key properties that are thoroughly proven and shown in this thesis: stability, accuracy and adaptability. Stability is rigorously demonstrated for both controllers. Accuracy is shown by comparing these novel controllers with other industry standard algorithms in several different scenarios: there is a gain in accuracy of at least 15% for each controller, and in many cases much more than that. A new tuning algorithm based on swarm heuristics optimisation was also developed as part of this research in order to tune, in an online manner, the standard Proportional-Integral-Derivative (PID) controllers used for benchmarking. Finally, adaptability of these controllers can be seen as a combination of four elements: mathematical model extensibility, cost matrices tuning, reduced computation time required, and no prior knowledge of the navigation or guidance strategies needed. Further simulations on real planetary landing trajectories have shown that these controllers have the capacity of achieving landing errors on the order of pinpoint landing requirements, making them not only very precise UAV controllers, but also potential candidates for pinpoint landing unmanned missions.
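
    The error signal that quaternion-based attitude controllers typically act on is the product of the desired quaternion with the conjugate of the current one; its vector part drives the control law. A minimal sketch of that textbook construction (the internals of OQTAL and HEX2OQTAL are not given in the abstract, so this is generic):

        import numpy as np

        def quat_mul(a, b):
            """Hamilton product of quaternions stored as [w, x, y, z]."""
            w1, x1, y1, z1 = a
            w2, x2, y2, z2 = b
            return np.array([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
            ])

        def quat_conj(q):
            return q * np.array([1.0, -1.0, -1.0, -1.0])

        def attitude_error(q_desired, q_current):
            """Error quaternion; the vector part is the axis scaled by sin(angle/2)."""
            q_err = quat_mul(q_desired, quat_conj(q_current))
            if q_err[0] < 0.0:   # pick the short rotation
                q_err = -q_err
            return q_err[1:]     # feed this 3-vector to the attitude control law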

  17. On the effects of systematic errors in analysis of nuclear scattering data

    International Nuclear Information System (INIS)

    Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.

    1995-01-01

    The effects of systematic errors in elastic scattering differential cross-section data upon the assessment of the quality of fits to those data have been studied. Three cases are studied, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from ¹²C, of 350 MeV ¹⁶O-¹⁶O scattering, and of 288.6 MeV ¹²C-¹²C scattering. First, to estimate the probability of any unknown systematic errors, select sets of data have been processed using the method of generalized cross validation; a method based upon the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to data, upon allowance for angle variation, became overdetermined. A far simpler S function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed energy inverse scattering study to specify effective, local, Schroedinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs

  18. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    Science.gov (United States)

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Oftentimes, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. MODELS OF AIR TRAFFIC CONTROLLERS ERRORS PREVENTION IN TERMINAL CONTROL AREAS UNDER UNCERTAINTY CONDITIONS

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2017-03-01

    Full Text Available Purpose: the aim of this study is to research applied models of air traffic controllers' errors prevention in terminal control areas (TMA) under uncertainty conditions. In this work the theoretical framework describing safety events and errors of air traffic controllers connected with the operations in TMA is proposed. Methods: optimisation of the terminal control area formal description based on the Threat and Error Management model and the TMA network model of air traffic flows. Results: the human factors variables associated with safety events in the work of air traffic controllers under uncertainty conditions were obtained. Principles for applying the Threat and Error Management model to air traffic controller operations and the TMA network model of air traffic flows were proposed. Discussion: the information processing context for preventing air traffic controller errors and examples of threats in the work of air traffic controllers relevant for TMA operations under uncertainty conditions are discussed.

  20. Error message recording and reporting in the SLC control system

    International Nuclear Information System (INIS)

    Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.

    1985-01-01

    Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed

  1. Error message recording and reporting in the SLC control system

    International Nuclear Information System (INIS)

    Spencer, N.; Bogart, J.; Phinney, N.; Thompson, K.

    1985-04-01

    Error or information messages that are signaled by control software either in the VAX host computer or the local microprocessor clusters are handled by a dedicated VAX process (PARANOIA). Messages are recorded on disk for further analysis and displayed at the appropriate console. Another VAX process (ERRLOG) can be used to sort, list and histogram various categories of messages. The functions performed by these processes and the algorithms used are discussed

  2. Nonlinear control of ships minimizing the position tracking errors

    Directory of Open Access Journals (Sweden)

    Svein P. Berge

    1999-07-01

    Full Text Available In this paper, a nonlinear tracking controller with integral action for ships is presented. The controller is based on state feedback linearization. Exponential convergence of the vessel-fixed position and velocity errors is proven by using Lyapunov stability theory. Since we only have two control devices, a rudder and a propeller, we choose to control the longship and the sideship position errors to zero while the heading is stabilized indirectly. A Virtual Reference Point (VRP) is defined at the bow or ahead of the ship. The VRP is used for tracking control. It is shown that the distance from the center of rotation to the VRP will influence the stability of the zero dynamics. By selecting the VRP at the bow or even ahead of the bow, the damping in yaw can be increased and the zero dynamics is stabilized. Hence, the heading angle will be less sensitive to wind, currents and waves. The control law is simulated by using a nonlinear model of the Japanese training ship Shiojimaru with excellent results. Wind forces are added to demonstrate the robustness and performance of the integral controller.

  3. Coordinated joint motion control system with position error correction

    Science.gov (United States)

    Danko, George L.

    2016-04-05

    Disclosed are an articulated hydraulic machine, a supporting control system, and a control method for the same. The articulated hydraulic machine has an end effector for performing useful work. The control system is capable of controlling the end effector for automated movement along a preselected trajectory. The control system has a position error correction system to correct discrepancies between an actual end effector trajectory and a desired end effector trajectory. The correction system can employ one or more absolute position signals provided by one or more acceleration sensors supported by one or more movable machine elements. Good trajectory positioning and repeatability can be obtained. A two-joystick controller system is enabled, which can in some cases facilitate the operator's task and enhance their work quality and productivity.

  4. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    International Nuclear Information System (INIS)

    DeSalvo, Riccardo

    2015-01-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similar to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in a breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism different from loss angle and viscous models is necessary. • Mitigation measures proposed may bring coherence to the measurements of G.

  5. Unequal error control scheme for dimmable visible light communication systems

    Science.gov (United States)

    Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan

    2017-01-01

    Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) with different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base layer recovery compared to the use of the equal error control (EEC) scheme for different dimming target values and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.

  6. Reliability and Measurement Error of Tensiomyography to Assess Mechanical Muscle Function: A Systematic Review.

    Science.gov (United States)

    Martín-Rodríguez, Saúl; Loturco, Irineu; Hunter, Angus M; Rodríguez-Ruiz, David; Munguia-Izquierdo, Diego

    2017-12-01

    Martín-Rodríguez, S, Loturco, I, Hunter, AM, Rodríguez-Ruiz, D, and Munguia-Izquierdo, D. Reliability and measurement error of tensiomyography to assess mechanical muscle function: A systematic review. J Strength Cond Res 31(12): 3524-3536, 2017-Interest in studying mechanical skeletal muscle function through tensiomyography (TMG) has increased in recent years. This systematic review aimed to (a) report the reliability and measurement error of all TMG parameters (i.e., maximum radial displacement of the muscle belly [Dm], contraction time [Tc], delay time [Td], half-relaxation time [½ Tr], and sustained contraction time [Ts]) and (b) provide critical reflection on how to perform accurate and appropriate measurements for informing clinicians, exercise professionals, and researchers. A comprehensive literature search was performed of the Pubmed, Scopus, Science Direct, and Cochrane databases up to July 2017. Eight studies were included in this systematic review. Meta-analysis could not be performed because of the low quality of the evidence of some studies evaluated. Overall, the review of the 9 studies involving 158 participants revealed high relative reliability (intraclass correlation coefficient [ICC]) for Dm (0.91-0.99); moderate-to-high ICC for Ts (0.80-0.96), Tc (0.70-0.98), and ½ Tr (0.77-0.93); and low-to-high ICC for Td (0.60-0.98), independently of the evaluated muscles. In addition, absolute reliability (coefficient of variation [CV]) was low for all TMG parameters except for ½ Tr (CV > 20%), whereas measurement error indexes were high for this parameter. In conclusion, this study indicates that 3 of the TMG parameters (Dm, Td, and Tc) are highly reliable, whereas ½ Tr demonstrates insufficient reliability, and thus should not be used in future studies.
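
    Both indices reported above are computable from a subjects-by-trials matrix of TMG measurements. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, Shrout-Fleiss form; individual studies may have used other ICC variants) and the within-subject CV:

        import numpy as np

        def icc_2_1(data):
            """ICC(2,1) for a (n_subjects, k_trials) array of measurements."""
            n, k = data.shape
            grand = data.mean()
            ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
            resid = (data - data.mean(axis=1, keepdims=True)
                          - data.mean(axis=0, keepdims=True) + grand)
            ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                         + k * (ms_cols - ms_err) / n)

        def cv_percent(data):
            """Mean within-subject coefficient of variation (%), absolute reliability."""
            return 100.0 * (data.std(axis=1, ddof=1) / data.mean(axis=1)).mean()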

  7. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can...
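
    In an external bias description, the deterministic simulator output is augmented with an autocorrelated bias process, y_t = M(x_t) + B_t + ε_t. A minimal sketch, assuming an AR(1) bias and illustrative parameter values:

        import numpy as np

        def simulate_ebd(model_output, phi=0.95, sigma_b=0.2, sigma_eps=0.05, seed=0):
            """External bias description: observation = model + AR(1) bias + white noise.

            phi, sigma_b: autocorrelation and innovation std of the systematic bias B_t.
            sigma_eps:    std of the remaining iid observation error.
            """
            rng = np.random.default_rng(seed)
            bias = np.zeros(len(model_output))
            for t in range(1, len(model_output)):
                bias[t] = phi * bias[t - 1] + rng.normal(0.0, sigma_b)
            return (np.asarray(model_output, float) + bias
                    + rng.normal(0.0, sigma_eps, size=len(model_output)))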

  8. Noncontact thermometry via laser pumped, thermographic phosphors: Characterization of systematic errors and industrial applications

    International Nuclear Information System (INIS)

    Gillies, G.T.; Dowell, L.J.; Lutz, W.N.; Allison, S.W.; Cates, M.R.; Noel, B.W.; Franks, L.A.; Borella, H.M.

    1987-10-01

    There are a growing number of industrial measurement situations that call for a high precision, noncontact method of thermometry. Our collaboration has been successful in developing one such method based on the laser-induced fluorescence of rare-earth-doped ceramic phosphors like Y₂O₃:Eu. In this paper, we summarize the results of characterization studies aimed at identifying the sources of systematic error in a laboratory-grade version of the method. We then go on to present data from measurements made in the afterburner plume of a jet turbine and inside an operating permanent magnet motor. 12 refs., 6 figs.

  9. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    Science.gov (United States)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasmas have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model, which is to estimate the error field in EXTRAP T2R. Two potential methods that can estimate the error field are discussed. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
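
    The system-identification step that precedes such a controller is commonly a least-squares fit of a low-order ARX model to input-output data. A minimal sketch of that step and of the one-step predictor an MPC cost function would repeatedly call (illustrative scalar case; the EXTRAP T2R model is multivariable):

        import numpy as np

        def fit_arx(u, y, na=2, nb=2):
            """Least-squares ARX fit: y[t] = sum a_i*y[t-i] + sum b_j*u[t-j]."""
            n0 = max(na, nb)
            phi = np.asarray([np.concatenate((y[t - na:t][::-1], u[t - nb:t][::-1]))
                              for t in range(n0, len(y))])
            theta, *_ = np.linalg.lstsq(phi, y[n0:], rcond=None)
            return theta[:na], theta[na:]   # AR coefficients, input coefficients

        def predict_one_step(a, b, y_hist, u_hist):
            """One-step-ahead prediction; y_hist/u_hist are arrays of the last na/nb samples."""
            return a @ y_hist[::-1] + b @ u_hist[::-1]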

  10. Control of Human Error and Comparison of Risk Levels after Corrective Action with the SHERPA Method in a Control Room of the Petrochemical Industry

    Directory of Open Access Journals (Sweden)

    A. Zakerian

    2011-12-01

    Full Text Available Background and aims: Today, in many fields such as the nuclear, military and chemical industries, human errors may result in disaster. Accidents in different parts of the world emphasize this subject; examples include the Chernobyl disaster (1986), the Three Mile Island accident (1979) and the Flixborough explosion (1974). Identification of human errors, especially in important and intricate systems, is therefore necessary and unavoidable for devising predictive control methods. Methods: This research is a case study performed at the Zagross Methanol Company in Asalouye (South Pars). The walking-talking through method with process experts and control room operators, together with inspection of technical documents, was used to collect the required information and complete the Systematic Human Error Reduction and Prediction Approach (SHERPA) worksheets. Results: Analysis of the SHERPA worksheets indicated that 71.25% of errors were intolerable, 26.75% were undesirable, 2% were acceptable (with revision) and 0% were acceptable; after corrective action, the forecast risk levels were 0% intolerable, 4.35% undesirable, 58.55% acceptable (with revision) and 37.1% acceptable. Conclusion: These results show that this method is applicable and useful in different industries, especially the chemical industries, for identifying human errors that may lead to accidents and incidents.

  11. Human error mode identification for NPP main control room operations using soft controls

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jang, Seung-Cheol

    2011-01-01

    The operation environment of main control rooms (MCRs) in modern nuclear power plants (NPPs) has considerably changed over the years. Advanced MCRs, which have been designed by adapting digital and computer technologies, have simpler interfaces using large display panels, computerized displays, soft controls, computerized procedure systems, and so on. The actions for the NPP operations are performed using soft controls in advanced MCRs. Soft controls have different features from conventional controls. Operators need to navigate the screens to find indicators and controls and manipulate controls using a mouse, touch screens, and so on. Due to these different interfaces, different human errors should be considered in the human reliability analysis (HRA) for advanced MCRs. In this work, human errors that could occur during operation executions using soft controls were analyzed. This work classified the human errors in soft controls into six types, and the reasons that affect the occurrence of the human errors were also analyzed. (author)

  12. Errors in patient specimen collection: application of statistical process control.

    Science.gov (United States)

    Dzik, Walter Sunny; Beckman, Neil; Selleng, Kathleen; Heddle, Nancy; Szczepiorkowski, Zbigniew; Wendel, Silvano; Murphy, Michael

    2008-10-01

    Errors in the collection and labeling of blood samples for pretransfusion testing increase the risk of transfusion-associated patient morbidity and mortality. Statistical process control (SPC) is a recognized method to monitor the performance of a critical process. An easy-to-use SPC method was tested to determine its feasibility as a tool for monitoring quality in transfusion medicine. SPC control charts were adapted to a spreadsheet presentation. Data tabulating the frequency of mislabeled and miscollected blood samples from 10 hospitals in five countries from 2004 to 2006 were used to demonstrate the method. Control charts were produced to monitor process stability. The participating hospitals found the SPC spreadsheet very suitable to monitor the performance of the sample labeling and collection and applied SPC charts to suit their specific needs. One hospital monitored subcategories of sample error in detail. A large hospital monitored the number of wrong-blood-in-tube (WBIT) events. Four smaller-sized facilities, each following the same policy for sample collection, combined their data on WBIT samples into a single control chart. One hospital used the control chart to monitor the effect of an educational intervention. A simple SPC method is described that can monitor the process of sample collection and labeling in any hospital. SPC could be applied to other critical steps in the transfusion processes as a tool for biovigilance and could be used to develop regional or national performance standards for pretransfusion sample collection. A link is provided to download the spreadsheet for free.
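
    The chart arithmetic behind such a spreadsheet is that of a standard p-chart for proportions. A minimal sketch, assuming per-period counts of erroneous and total samples (the hospitals' spreadsheet itself is not reproduced here):

        import numpy as np

        def p_chart(errors, samples):
            """p-chart for mislabeled/miscollected specimen rates.

            errors, samples: per-period (e.g. monthly) counts.
            Returns proportions, centre line, 3-sigma limits, and out-of-control flags.
            """
            errors = np.asarray(errors, float)
            samples = np.asarray(samples, float)
            p = errors / samples
            p_bar = errors.sum() / samples.sum()              # centre line
            sigma = np.sqrt(p_bar * (1 - p_bar) / samples)    # varies with sample size
            ucl = p_bar + 3.0 * sigma
            lcl = np.clip(p_bar - 3.0 * sigma, 0.0, None)     # proportion floor at 0
            return p, p_bar, ucl, lcl, (p > ucl) | (p < lcl)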

  13. The using of the control room automation against human errors

    International Nuclear Information System (INIS)

    Kautto, A.

    1993-01-01

    Control room automation developed very strongly during the 1980s at IVO (Imatran Voima Oy). This work expanded considerably with the building of the full-scope training simulator for the Loviisa plant. Important milestones have been, for example, the testing of the Critical Function Monitoring System, a concept developed by Combustion Engineering Inc., in the Loviisa training simulator in 1982; the replacement of the process and simulator computers in Loviisa in 1989 and 1990; and the introduction of computer-based procedures in operator training in 1993. By developing automation and procedures it is possible to minimize the probability of human error. However, it is not possible to totally eliminate the risks caused by human errors. (orig.)

  14. Carers' Medication Administration Errors in the Domiciliary Setting: A Systematic Review.

    Directory of Open Access Journals (Sweden)

    Anam Parand

    Full Text Available Medications are mostly taken in patients' own homes, increasingly administered by carers, yet studies of medication safety have been largely conducted in the hospital setting. We aimed to review studies of how carers cause and/or prevent medication administration errors (MAEs) within the patient's home; to identify types, prevalence and causes of these MAEs and any interventions to prevent them. A narrative systematic review of literature published between 1 Jan 1946 and 23 Sep 2013 was carried out across the databases EMBASE, MEDLINE, PSYCHINFO, COCHRANE and CINAHL. Empirical studies were included where carers were responsible for preventing/causing MAEs in the home, and standardised tools were used for data extraction and quality assessment. Thirty-six papers met the criteria for narrative review, 33 of which included parents caring for children, two predominantly comprised adult children and spouses caring for older parents/partners, and one focused on paid carers mostly looking after older adults. The carer administration error rate ranged from 1.9 to 33% of medications administered and from 12 to 92.7% of carers administering medication. These included dosage errors, omitted administration, wrong medication and wrong time or route of administration. Contributory factors included individual carer factors (e.g. carer age), environmental factors (e.g. storage), medication factors (e.g. number of medicines), prescription communication factors (e.g. comprehensibility of instructions), psychosocial factors (e.g. carer-to-carer communication), and care-recipient factors (e.g. recipient age). The few interventions effective in preventing MAEs involved carer training and tailored equipment. This review shows that home medication administration errors made by carers are a potentially serious patient safety issue. Carers made similar errors to those made by professionals in other contexts and a wide variety of contributory factors were identified. The home care

  15. The impact of treatment complexity and computer-control delivery technology on treatment delivery errors

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; Lash, Kathy L.; Matrone, Gwynne M.; Volkman, Susan K.; McShan, Daniel L.; Kessler, Marc L.; Lichter, Allen S.

    1998-01-01

    machines. Results: The overall reported error rate (all treatments, machines) was 0.13% per segment, or 0.44% per treatment session. The rate (per machine) depended on automation and plan complexity. The error rates per segment for machines M1 through M4 were 0.16%, 0.27%, 0.12%, 0.05%, respectively, while plan complexity increased from M1 up to machine M4. Machine M4 (the most complex plans and automation) had the lowest error rate. The error rate decreased with increasing automation in spite of increasing plan complexity, while for the manual machines, the error rate increased with complexity. Note that the real error rates on the two manual machines are likely to be higher than shown here (due to unnoticed and/or unreported errors), while (particularly on M4) virtually all random treatment delivery errors were noted by the CCRS system and related QA checks (including routine checks of machine and table readouts for each treatment). Treatment delivery times averaged from 14 min to 23 min per plan, and depended on the number of segments/plan, although this analysis is complicated by other factors. Conclusion: Use of a sophisticated computer-controlled delivery system for routine patient treatments with complex 3D conformal plans has led to a decrease in treatment delivery errors, while at the same time allowing delivery of increasingly complex and sophisticated conformal plans with little increase in treatment time. With renewed vigilance for the possibility of systematic problems, it is clear that use of complete and integrated computer-controlled delivery systems can provide improvements in treatment delivery, since more complex plans can be delivered with fewer errors, and without increasing treatment time

  16. Design of a real-time spectroscopic rotating compensator ellipsometer without systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Broch, Laurent, E-mail: laurent.broch@univ-lorraine.fr [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Stein, Nicolas [Institut Jean Lamour, Universite de Lorraine, UMR 7198 CNRS, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France); Zimmer, Alexandre [Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS, Universite de Bourgogne, 9 avenue Alain Savary BP 47870, F-21078 Dijon Cedex (France); Battie, Yann; Naciri, Aotmane En [Laboratoire de Chimie Physique-Approche Multi-echelle des Milieux Complexes (LCP-A2MC, EA 4632), Universite de Lorraine, 1 boulevard Arago CP 87811, F-57078 Metz Cedex 3 (France)

    2014-11-28

    We describe a spectroscopic ellipsometer in the visible domain (400–800 nm) based on rotating compensator technology using two detectors. The classical analyzer is replaced by a fixed Rochon birefringent beamsplitter which splits the incident light wave into two perpendicularly polarized waves, one oriented at + 45° and the other at − 45° with respect to the plane of incidence. Both emergent optical signals are analyzed by two identical CCD detectors which are synchronized by an optical encoder fixed on the shaft of the stepper motor of the compensator. The final spectrum is the result of the two averaged Ψ and Δ spectra acquired by both detectors. We show that the Ψ and Δ spectra are acquired without systematic errors over a spectral range from 400 to 800 nm. The acquisition time can be adjusted down to 25 ms. The setup was validated by monitoring the first steps of bismuth telluride film electrocrystallization. The results show that growth parameters such as film thickness and volume fraction of the deposited material can be extracted with better trueness. - Highlights: • High-speed rotating compensator ellipsometer equipped with 2 detectors. • Ellipsometric angles acquired without systematic errors. • In-situ monitoring of the electrocrystallization of a bismuth telluride thin layer. • High accuracy of fitted physical parameters.

  17. Systematic errors due to linear congruential random-number generators with the Swendsen-Wang algorithm: a warning.

    Science.gov (United States)

    Ossola, Giovanni; Sokal, Alan D

    2004-08-01

    We show that linear congruential pseudo-random-number generators can cause systematic errors in Monte Carlo simulations using the Swendsen-Wang algorithm, if the lattice size is a multiple of a very large power of 2 and one random number is used per bond. These systematic errors arise from correlations within a single bond-update half-sweep. The errors can be eliminated (or at least radically reduced) by updating the bonds in a random order or in an aperiodic manner. It also helps to use a generator of large modulus (e.g., 60 or more bits).
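
    The structural weakness the authors warn about is easy to exhibit. The sketch below implements a 32-bit linear congruential generator (Numerical Recipes-style constants, chosen for illustration) and shows that its low-order bits have very short periods modulo powers of 2, which is the kind of hidden periodicity that can resonate with a lattice whose side is a large power of 2 when one random number is used per bond.

      def lcg(seed, a=1664525, c=1013904223, m=2**32):
          """Linear congruential generator (Numerical Recipes-style constants)."""
          x = seed
          while True:
              x = (a * x + c) % m
              yield x

      # With a power-of-two modulus, bit k of the state has period at most
      # 2**(k+1), so the lowest bit of this generator simply alternates.
      gen = lcg(12345)
      print("lowest bit:", [next(gen) & 1 for _ in range(16)])   # 0,1,0,1,...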

  18. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    Science.gov (United States)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be corrected for in the retrieval algorithms to create a set of data which is closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% worldwide of the true value. At this stage, the accuracy is within 1%, but through correcting small errors contained in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  19. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S. [et al.]

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  20. Barriers to reporting medication errors and near misses among nurses: A systematic review.

    Science.gov (United States)

    Vrbnjak, Dominika; Denieffe, Suzanne; O'Gorman, Claire; Pajnkihar, Majda

    2016-11-01

    To explore barriers to nurses' reporting of medication errors and near misses in hospital settings. Systematic review. Medline, CINAHL, PubMed and Cochrane Library, in addition to Google and Google Scholar and reference lists of relevant studies published in English between January 1981 and April 2015, were searched for relevant qualitative, quantitative or mixed methods empirical studies or unpublished PhD theses. Papers with a primary focus on barriers to reporting medication errors and near misses in nursing were included. The titles and abstracts of the search results were assessed for eligibility and relevance by one of the authors. After retrieval of the full texts, two of the authors independently made decisions concerning the final inclusion and these were validated by the third reviewer. Three authors independently assessed methodological quality of studies. Relevant data were extracted and findings were synthesised using thematic synthesis. From 4038 identified records, 38 studies were included in the synthesis. Findings suggest that organizational barriers such as culture, the reporting system and management behaviour, in addition to personal and professional barriers such as fear, accountability and characteristics of nurses, are barriers to reporting medication errors. To overcome reported barriers it is necessary to develop a non-blaming, non-punitive and non-fearful learning culture at unit and organizational level. Anonymous, effective, uncomplicated and efficient reporting systems and supportive management behaviour that provides open feedback to nurses are needed. Nurses are accountable for patients' safety, so they need to be educated and skilled in error management. Lack of research into barriers to reporting of near misses and low awareness of reporting suggests the need for further research and development of educational and management approaches to overcome these barriers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Statistical method for quality control in presence of measurement errors

    International Nuclear Information System (INIS)

    Lauer-Peccoud, M.R.

    1998-01-01

    In a quality inspection of a set of items where the measurements of the quality characteristic of each item are contaminated by random errors, one can take wrong decisions which are damaging to quality. So it is important to control the risks in such a way that a final quality level is ensured. We consider that an item is defective or not according to whether the value G of its quality characteristic is larger or smaller than a given level g0. We assume that, due to the lack of precision of the measurement instrument, the measurement M of this characteristic is expressed by f(G) + ξ, where f is an increasing function such that the value f(g0) is known and ξ is a random error with mean zero and given variance. First we study the problem of the determination of a critical measure m such that a specified quality target is reached after the classification of a lot of items, where each item is accepted or rejected depending on whether its measurement is smaller or greater than m. Then we analyse the problem of testing the global quality of a lot from the measurements of a sample of items taken from the lot. For these two kinds of problems and for different quality targets, we propose solutions, emphasizing the case where the function f is linear and the error ξ and the variable G are Gaussian. Simulation results make it possible to assess the efficiency of the different control procedures considered and their robustness with respect to deviations from the assumptions used in the theoretical derivations. (author)
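
    A minimal Monte Carlo sketch of the first problem, under the Gaussian/linear special case the abstract emphasizes (taking f(G) = G for simplicity); all parameter values are invented for illustration.

      import random

      def simulate(m, g0=10.0, mu=9.5, sigma_g=0.8, sigma_xi=0.3, n=200000, seed=1):
          """Accept an item when its noisy measurement M = G + xi is below m.

          Returns (acceptance rate, fraction of accepted items that are in
          fact defective, i.e. have true value G > g0).  Uses the linear
          f(G) = G, Gaussian special case emphasised in the abstract.
          """
          rng = random.Random(seed)
          accepted = defective_accepted = 0
          for _ in range(n):
              g = rng.gauss(mu, sigma_g)
              if g + rng.gauss(0.0, sigma_xi) <= m:
                  accepted += 1
                  defective_accepted += g > g0
          return accepted / n, defective_accepted / max(accepted, 1)

      # Tightening the critical measure m below g0 trades yield for quality:
      for m in (10.0, 9.8, 9.6):
          acc, bad = simulate(m)
          print(f"m={m}: accept {acc:.1%}, defective among accepted {bad:.2%}")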

  2. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    International Nuclear Information System (INIS)

    Aljneibi, Hanan Salah Ali; Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun

    2015-01-01

    To achieve the switch from conventional analog-based designs to digital designs in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, collectively called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls, each designed with different and independent input devices. Operation with soft controls requires operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These computerized displays and soft controls may make operation more convenient, but they may also cause new types of human error. In this study, the human error mechanism during soft control use is studied and modeled for the analysis and enhancement of human performance (or reduction of human errors) during NPP operation. The developed model could contribute to many applications for improving human performance, HMI designs, and operator training programs in ACRs. The model is based on the following assumptions: a human operator has a certain capacity of cognitive resources, and if the resources required by the operating tasks are greater than the resources invested by the operator, human error (or poor human performance) is likely to occur (especially a 'slip'); good HMI (human-machine interface) design decreases the required resources; an operator's skillfulness decreases the required resources; and high vigilance increases the invested resources.

  3. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    Energy Technology Data Exchange (ETDEWEB)

    Aljneibi, Hanan Salah Ali [Khalifa Univ., Abu Dhabi (United Arab Emirates); Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve the switch from conventional analog-based designs to digital designs in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, collectively called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls, each designed with different and independent input devices. Operation with soft controls requires operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These computerized displays and soft controls may make operation more convenient, but they may also cause new types of human error. In this study, the human error mechanism during soft control use is studied and modeled for the analysis and enhancement of human performance (or reduction of human errors) during NPP operation. The developed model could contribute to many applications for improving human performance, HMI designs, and operator training programs in ACRs. The model is based on the following assumptions: a human operator has a certain capacity of cognitive resources, and if the resources required by the operating tasks are greater than the resources invested by the operator, human error (or poor human performance) is likely to occur (especially a 'slip'); good HMI (human-machine interface) design decreases the required resources; an operator's skillfulness decreases the required resources; and high vigilance increases the invested resources.
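
    The stated assumptions can be turned into a toy resource-balance model. The sketch below only illustrates the structure of those assumptions, with invented weights rather than the calibrated model developed in the paper.

      def error_likelihood(task_demand, hmi_quality, skill, vigilance, capacity=1.0):
          """Toy rendering of the resource-balance assumptions above.

          Required resources fall with good HMI design and with skill;
          invested resources rise with vigilance, bounded by capacity.
          An error (notably a 'slip') becomes likely when required exceeds
          invested.  All weights are invented for illustration; this is not
          the calibrated model of the paper.
          """
          required = task_demand * (1.0 - 0.3 * hmi_quality) * (1.0 - 0.4 * skill)
          invested = capacity * (0.5 + 0.5 * vigilance)
          deficit = max(0.0, required - invested)
          return deficit / (deficit + 1.0)        # squash into [0, 1)

      # The same demanding navigation task on a poor interface with a novice,
      # then after HMI improvement and training (all inputs scaled to [0, 1]):
      print(error_likelihood(task_demand=2.0, hmi_quality=0.2, skill=0.3, vigilance=0.6))
      print(error_likelihood(task_demand=2.0, hmi_quality=0.9, skill=0.8, vigilance=0.6))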

  4. Error analysis of acceleration control loops of a synchrotron

    International Nuclear Information System (INIS)

    Zhang, S.Y.; Weng, W.T.

    1991-01-01

    For beam control during acceleration, it is conventional to derive the frequency from an external reference, be it a field marker or an external oscillator, and to provide phase and radius feedback loops to ensure the phase stability, radial position and emittance integrity of the beam. The open- and closed-loop behaviors of both feedback loops, and their response under possible frequency, phase and radius errors, are derived from fundamental principles and equations. The stability of the loops is investigated under a wide range of variations of the gain and time delays. Actual system performance of the AGS Booster is analyzed and compared to commissioning experience. Such analysis is useful for setting design criteria and tolerances for new proton synchrotrons. 4 refs., 13 figs
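
    The gain/time-delay stability question examined here can be illustrated with a toy discrete-time loop; the dynamics below are a generic delayed proportional loop, not the actual AGS Booster transfer functions.

      def closed_loop(gain, delay, steps=200, disturbance=1.0):
          """Toy discrete-time error loop with delayed proportional feedback.

          The loop applies a correction proportional to the error measured
          'delay' samples earlier: err[n+1] = err[n] - gain * err[n - delay].
          A stand-in for studying gain/time-delay stability, not the actual
          phase/radius loop dynamics of the synchrotron.
          """
          history = [0.0] * delay + [disturbance]
          for _ in range(steps):
              err = history[-1] - gain * history[-1 - delay]
              history.append(err)
          return history

      # The same gain that is comfortably stable without delay sits near the
      # stability margin with two samples of delay, and a larger gain diverges:
      for g, d in [(0.5, 0), (0.5, 2), (1.5, 2)]:
          tail = closed_loop(g, d)[-3:]
          print(f"gain={g}, delay={d}: last errors {[f'{e:+.2e}' for e in tail]}")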

  5. Managing Systematic Errors in a Polarimeter for the Storage Ring EDM Experiment

    Science.gov (United States)

    Stephenson, Edward J.; Storage Ring EDM Collaboration

    2011-05-01

    The EDDA plastic scintillator detector system at the Cooler Synchrotron (COSY) has been used to demonstrate that it is possible using a thick target at the edge of the circulating beam to meet the requirements for a polarimeter to be used in the search for an electric dipole moment on the proton or deuteron. Emphasizing elastic and low Q-value reactions leads to large analyzing powers and, along with thick targets, to efficiencies near 1%. Using only information obtained comparing count rates for oppositely vector-polarized beam states and a calibration of the sensitivity of the polarimeter to rate and geometric changes, the contribution of systematic errors can be suppressed below the level of one part per million.
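
    Count-rate comparisons between oppositely polarized beam states are commonly made robust with the cross-ratio, which cancels luminosity and detector-efficiency factors to first order; the sketch below assumes that standard construction rather than the collaboration's exact calibration procedure.

      import math

      def cross_ratio_asymmetry(L_up, R_up, L_dn, R_dn):
          """Left-right asymmetry via the cross-ratio of count rates.

          L*/R* are left/right detector counts for the two opposite vector
          polarization states.  The cross-ratio cancels luminosity and
          detector-efficiency factors to first order -- the rate and
          geometry systematics a storage-ring polarimeter must suppress.
          """
          r = math.sqrt((L_up * R_dn) / (L_dn * R_up))
          return (r - 1.0) / (r + 1.0)

      # Same physics asymmetry, but 5% more luminosity in the 'up' state and a
      # 3% less efficient right detector: the extracted asymmetry is unaffected.
      eps, lumi, eff_r = 0.02, 1.05, 0.97
      counts = dict(L_up=lumi * (1 + eps), R_up=lumi * eff_r * (1 - eps),
                    L_dn=(1 - eps), R_dn=eff_r * (1 + eps))
      print(cross_ratio_asymmetry(**counts))      # 0.02 exactly, by construction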

  6. Simulating systematic errors in X-ray absorption spectroscopy experiments: Sample and beam effects

    Energy Technology Data Exchange (ETDEWEB)

    Curis, Emmanuel [Laboratoire de Biomathematiques, Faculte de Pharmacie, Universite Rene Descartes (Paris V) - 4, Avenue de l'Observatoire, 75006 Paris (France)]. E-mail: emmanuel.curis@univ-paris5.fr; Osan, Janos [KFKI Atomic Energy Research Institute (AEKI) - P.O. Box 49, H-1525 Budapest (Hungary); Falkenberg, Gerald [Hamburger Synchrotronstrahlungslabor (HASYLAB), Deutsches Elektronen-Synchrotron (DESY) - Notkestrasse 85, 22607 Hamburg (Germany); Benazeth, Simone [Laboratoire de Biomathematiques, Faculte de Pharmacie, Universite Rene Descartes (Paris V) - 4, Avenue de l'Observatoire, 75006 Paris (France); Laboratoire d'Utilisation du Rayonnement Electromagnetique (LURE) - Batiment 209D, Campus d'Orsay, 91406 Orsay (France)]; Toeroek, Szabina [KFKI Atomic Energy Research Institute (AEKI) - P.O. Box 49, H-1525 Budapest (Hungary)

    2005-07-15

    The article presents an analytical model to simulate experimental imperfections in the realization of an X-ray absorption spectroscopy experiment, performed in transmission or fluorescence mode. Distinction is made between sources of systematic errors on a time-scale basis, to select the more appropriate model for their handling. For short time-scale, statistical models are the most suited. For large time-scale, the model is developed for sample and beam imperfections: mainly sample inhomogeneity, sample self-absorption, beam achromaticity. The ability of this model to reproduce the effects of these imperfections is exemplified, and the model is validated on real samples. Various potential application fields of the model are then presented.

  7. Simulating systematic errors in X-ray absorption spectroscopy experiments: Sample and beam effects

    International Nuclear Information System (INIS)

    Curis, Emmanuel; Osan, Janos; Falkenberg, Gerald; Benazeth, Simone; Toeroek, Szabina

    2005-01-01

    The article presents an analytical model to simulate experimental imperfections in the realization of an X-ray absorption spectroscopy experiment, performed in transmission or fluorescence mode. Distinction is made between sources of systematic errors on a time-scale basis, to select the more appropriate model for their handling. For short time-scale, statistical models are the most suited. For large time-scale, the model is developed for sample and beam imperfections: mainly sample inhomogeneity, sample self-absorption, beam achromaticity. The ability of this model to reproduce the effects of these imperfections is exemplified, and the model is validated on real samples. Various potential application fields of the model are then presented

  8. Control of error and convergence in ODE solvers

    International Nuclear Information System (INIS)

    Gustafsson, K.

    1992-03-01

    Feedback is a general principle that can be used in many different contexts. In this thesis it is applied to numerical integration of ordinary differential equations. An advanced integration method includes parameters and variables that should be adjusted during the execution. In addition, the integration method should be able to automatically handle situations such as initialization, restart after failures, etc. In this thesis we regard the algorithms for parameter adjustment and supervision as a controller. The controller measures different variables that tell the current status of the integration, and based on this information it decides how to continue. The design of the controller is vital in order to accurately and efficiently solve a large class of ordinary differential equations. The application of feedback control may appear far-fetched, but numerical integration methods are in fact dynamical systems. This is often overlooked in traditional numerical analysis. We derive dynamic models that describe the behavior of the integration method as well as the standard control algorithms in use today. Using these models it is possible to analyze properties of current algorithms, and also explain some generally observed misbehaviors. Further, we use the acquired insight to derive new and improved control algorithms, both for explicit and implicit Runge-Kutta methods. In the explicit case, the new controller gives good overall performance. In particular it overcomes the problem of oscillating stepsize sequences that is often experienced when the stepsize is restricted by numerical stability. The controller for implicit methods is designed so that it tracks changes in the differential equation better than current algorithms. In addition, it includes a new strategy for the equation solver, which allows the stepsize to vary more freely. This leads to smoother error control without excessive operations on the iteration matrix. (87 refs.) (au)
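
    The controller viewpoint of this thesis is the origin of the PI step-size rule now common in ODE codes. A minimal sketch, with typical textbook gains rather than the exact values derived in the thesis:

      def pi_step_size(err, err_prev, h, tol, order, safety=0.9):
          """PI step-size controller in the spirit of this thesis.

          Pure 'I' control is h * (tol/err)**(1/order); the added
          proportional term damps the step-size oscillations seen when the
          step is limited by numerical stability.  The gains are typical
          textbook choices, not the exact values derived in the thesis.
          """
          kI = 0.3 / order
          kP = 0.4 / order
          factor = safety * (tol / err) ** kI * (err_prev / err) ** kP
          return h * min(5.0, max(0.2, factor))   # limit step-size jumps

      # Error slightly above tolerance after a clean previous step:
      print(pi_step_size(err=2e-6, err_prev=8e-7, h=0.01, tol=1e-6, order=5))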

  9. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
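
    For reference, a minimal fixed-step tau-leap simulator for a birth-death process (illustrative rates; the Python standard library has no Poisson sampler, so a Knuth product-of-uniforms sampler is included):

      import math
      import random

      def _poisson(rng, lam):
          """Knuth product-of-uniforms Poisson sampler (fine for small means)."""
          limit, k, p = math.exp(-lam), 0, 1.0
          while True:
              p *= rng.random()
              if p <= limit:
                  return k
              k += 1

      def tau_leap(x0, rates, stoich, tau, t_end, seed=0):
          """Fixed-step tau-leap simulation of a pure jump process.

          rates(x) -- propensity of each reaction channel in state x
          stoich   -- state change per firing of each channel
          Every channel fires Poisson(propensity * tau) times per step --
          the approximation whose global weak error the paper expands.
          """
          rng = random.Random(seed)
          x, t = x0, 0.0
          while t < t_end:
              propensities = rates(x)              # frozen at the step start
              for a, change in zip(propensities, stoich):
                  x += change * _poisson(rng, a * tau)
              x = max(x, 0)                        # crude guard against negatives
              t += tau
          return x

      # Birth-death process: constant birth rate 2.0, death rate 0.1 * x
      print(tau_leap(x0=0, rates=lambda x: (2.0, 0.1 * x), stoich=(+1, -1),
                     tau=0.1, t_end=50.0))         # fluctuates around 20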

  10. Human-simulation-based learning to prevent medication error: A systematic review.

    Science.gov (United States)

    Sarfati, Laura; Ranchon, Florence; Vantard, Nicolas; Schwiertz, Vérane; Larbre, Virginie; Parat, Stéphanie; Faudel, Amélie; Rioufol, Catherine

    2018-01-31

    In the past 2 decades, there has been an increasing interest in simulation-based learning programs to prevent medication error (ME). To improve knowledge, skills, and attitudes in prescribers, nurses, and pharmaceutical staff, these methods enable training without directly involving patients. However, best practices for simulation for healthcare providers are as yet undefined. By analysing the current state of experience in the field, the present review aims to assess whether human simulation in healthcare helps to reduce ME. A systematic review was conducted on Medline from 2000 to June 2015, associating the terms "Patient Simulation," "Medication Errors," and "Simulation Healthcare." Reports of technology-based simulation were excluded, to focus exclusively on human simulation in nontechnical skills learning. Twenty-one studies assessing simulation-based learning programs were selected, focusing on pharmacy, medicine or nursing students, or concerning programs aimed at reducing administration or preparation errors, managing crises, or learning communication skills for healthcare professionals. The studies varied in design, methodology, and assessment criteria. Few demonstrated that simulation was more effective than didactic learning in reducing ME. This review highlights a lack of long-term assessment and real-life extrapolation, with limited scenarios and participant samples. These various experiences, however, help in identifying the key elements required for an effective human simulation-based learning program for ME prevention: ie, scenario design, debriefing, and perception assessment. The performance of these programs depends on their ability to reflect reality and on professional guidance. Properly regulated simulation is a good way to train staff in events that happen only exceptionally, as well as in standard daily activities. By integrating human factors, simulation seems to be effective in preventing iatrogenic risk related to ME, if the program is

  11. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response

    Directory of Open Access Journals (Sweden)

    Takahiro eSoshi

    2015-01-01

    Full Text Available Post-error slowing is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimulus disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error rate for No-go trials was 15%, but post-error slowing did not take place immediately. Delayed post-error slowing was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to post-error slowing. Stimulus-locked N2 was negatively correlated with post-error slowing and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater post-error slowing and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and post-error slowing did not occur quickly. Furthermore, post-error slowing and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke

  12. Systematic analysis of dependent human errors from the maintenance history at Finnish NPPs - A status report

    Energy Technology Data Exchange (ETDEWEB)

    Laakso, K. [VTT Industrial Systems (Finland)]

    2002-12-01

    Operating experience has shown missed-detection events, where faults have passed inspections and functional tests and persisted into operating periods after the maintenance activities during an outage. The causes of these failures have often been complex event sequences involving human and organisational factors. Common cause and other dependent failures of safety systems, in particular, may contribute significantly to the reactor core damage risk. The topic has been addressed in the Finnish studies of human common cause failures, where experiences of latent human errors have been searched for and analysed in detail in the maintenance history. The review of the bulk of the analysis results from the Olkiluoto and Loviisa plant sites shows that instrumentation and control and electrical equipment are more prone to failure events caused by human error than other maintenance areas, and that plant modifications and also predetermined preventive maintenance are significant sources of common cause failures. Most errors stem from the refuelling and maintenance outage period at both sites, and less than half of the dependent errors were identified during the same outage. The dependent human errors originating from modifications could be reduced by a more tailored specification and coverage of their start-up testing programs. Improvements could also be achieved by more case-specific planning of the installation inspection and functional testing of complicated maintenance works, or of work objects of higher plant safety and availability importance. Better use and analysis of condition monitoring information for maintenance steering could also help. The feedback from discussions of the analysis results with plant experts and professionals remains crucial in developing the final conclusions and recommendations that meet the specific development needs at the plants. (au)

  13. An Analysis and Quantification Method of Human Errors of Soft Controls in Advanced MCRs

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jae Whan; Jang, Seung Cheol

    2011-01-01

    In this work, a method was proposed for quantifying human errors that may occur during operation executions using soft control. Soft controls of advanced main control rooms (MCRs) have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to define the human error modes and to quantify the error probability for evaluating the reliability of the system and preventing errors. This work suggests a modified K-HRA method for quantifying error probability

  14. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    International Nuclear Information System (INIS)

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-01-01

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.

  15. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    Energy Technology Data Exchange (ETDEWEB)

    Strömberg, Sten, E-mail: sten.stromberg@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Nistor, Mihaela, E-mail: mn@bioprocesscontrol.com [Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden); Liu, Jing, E-mail: jing.liu@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden)

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
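
    The practical remedy implied by these findings is to log ambient temperature and pressure and normalize measured gas volumes to dry standard conditions. A sketch of that correction, using the ideal gas law and a Tetens-type water vapour pressure approximation (illustrative, not the paper's exact procedure):

      def normalize_gas_volume(v_ml, t_celsius, p_ambient_kpa):
          """Normalize a wet gas volume reading to dry gas at 0 C, 101.325 kPa.

          Ideal-gas correction plus removal of the water-vapour partial
          pressure (Tetens-type approximation) -- the kind of ambient-T/p
          bookkeeping the study shows is needed for comparable BMP results.
          """
          t_kelvin = t_celsius + 273.15
          p_h2o = 0.61078 * 10 ** (7.5 * t_celsius / (t_celsius + 237.3))  # kPa
          p_dry = p_ambient_kpa - p_h2o
          return v_ml * (p_dry / 101.325) * (273.15 / t_kelvin)

      # The same 100 mL reading taken at sea level vs. at high altitude
      # corresponds to quite different amounts of gas:
      print(normalize_gas_volume(100.0, 25.0, 101.3))   # ~88.7 mL at STP
      print(normalize_gas_volume(100.0, 25.0, 80.0))    # ~69.5 mL at STP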

  16. Systematic instrumental errors between oxygen saturation analysers in fetal blood during deep hypoxemia.

    Science.gov (United States)

    Porath, M; Sinha, P; Dudenhausen, J W; Luttkus, A K

    2001-05-01

    During a study of artificially produced deep hypoxemia in fetal cord blood, systematic errors of three different oxygen saturation analysers were evaluated against a reference CO oximeter. The oxygen tensions (PO2) of 83 pre-heparinized fetal blood samples from umbilical veins were reduced by tonometry to 1.3 kPa (10 mm Hg) and 2.7 kPa (20 mm Hg). The oxygen saturation (SO2) was determined (n=1328) on a reference CO oximeter (ABL625, Radiometer Copenhagen) and on three tested instruments (two CO oximeters: Chiron865, Bayer Diagnostics, and ABL700, Radiometer Copenhagen; and a portable blood gas analyser, i-STAT, Abbott). The CO oximeters measure the oxyhemoglobin and the reduced hemoglobin fractions by absorption spectrophotometry. The i-STAT system calculates the oxygen saturation from the measured pH, PO2, and PCO2. The measurements were performed in duplicate. Statistical evaluation focused on the differences between duplicate measurements and on systematic instrumental errors in oxygen saturation analysis compared to the reference CO oximeter. After tonometry, the median saturation dropped to 32.9% at a PO2=2.7 kPa (20 mm Hg), defined as saturation range 1, and to 10% SO2 at a PO2=1.3 kPa (10 mm Hg), defined as range 2. With decreasing SO2, all devices showed an increased difference between duplicate measurements. ABL625 and ABL700 showed the closest agreement between instruments (0.25% SO2 bias at saturation range 1 and -0.33% SO2 bias at saturation range 2). Chiron865 indicated higher saturation values than ABL625 (3.07% SO2 bias at saturation range 1 and 2.28% SO2 bias at saturation range 2). Calculated saturation values (i-STAT) were more than 30% lower than the measured values of ABL625. The disagreement among CO oximeters was small but increased under deep hypoxemia. The calculated saturation values were unacceptably low.
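
    The i-STAT's computed saturation can be approximated with the standard Severinghaus (1979) empirical curve plus a simple Bohr-shift adjustment; the device's actual algorithm is proprietary, so the sketch below is an illustrative reconstruction (coefficients vary between sources).

      import math

      def severinghaus_so2(po2_mmhg, ph=7.40, pco2_mmhg=40.0):
          """Estimate functional SO2 (%) from PO2, pH and PCO2.

          Severinghaus (1979) empirical curve with a simple Bohr-shift
          adjustment of PO2 for pH and PCO2.  Coefficients differ between
          sources and the i-STAT's actual algorithm is proprietary, so this
          is an illustrative reconstruction only.
          """
          shift = 0.48 * (ph - 7.40) - 0.06 * math.log10(pco2_mmhg / 40.0)
          po2_virtual = po2_mmhg * 10.0 ** shift
          x = po2_virtual ** 3 + 150.0 * po2_virtual
          return 100.0 / (23400.0 / x + 1.0)

      # The two hypoxemic levels produced in the study:
      print(f"{severinghaus_so2(20.0):.1f} % at PO2 = 20 mm Hg")   # ~32.0 %
      print(f"{severinghaus_so2(10.0):.1f} % at PO2 = 10 mm Hg")   # ~9.7 %

    At the default pH of 7.4 this curve lands near the study's measured median saturations, which underscores how sensitive any calculated SO2 is to the input blood-gas values on the steep part of the dissociation curve.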

  17. Avoiding a Systematic Error in Assessing Fat Graft Survival in the Breast with Repeated Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Glovinski, Peter Viktor; Herly, Mikkel; Müller, Felix C

    2016-01-01

    Several techniques for measuring breast volume (BV) are based on examining the breast on magnetic resonance imaging. However, when techniques designed to measure total BV are used to quantify BV changes, for example, after fat grafting, a systematic error is introduced because BV changes lead to ...

  18. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    International Nuclear Information System (INIS)

    Gordon, J J; Siebers, J V

    2007-01-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion σ[1 − γN/25] ≲ 0.2; outside this range they underestimated the required margins, because they fail to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ≳ σ_P, where σ_P = 0.32 cm is the standard deviation of the normal dose penumbra. (The qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σ_P takes values other than 0.32 cm.) When σ ≲ σ_P, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ≳ σ_P, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of treatments. The proposed alternative margin algorithm provides better margin
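
    For context, the VHMF referred to above is commonly quoted as M = 2.5Σ + 1.64(√(σ² + σ_P²) − σ_P), often linearized to 2.5Σ + 0.7σ. A small sketch:

      import math

      def vhmf_margin(Sigma, sigma, sigma_p=0.32):
          """CTV-to-PTV margin (cm) from the van Herk margin formula.

          Sigma   -- SD of systematic setup errors (cm)
          sigma   -- SD of random setup errors (cm)
          sigma_p -- SD of the Gaussian dose penumbra (0.32 cm, as above)
          Full form 2.5*Sigma + 1.64*(sqrt(sigma**2 + sigma_p**2) - sigma_p),
          commonly linearized to 2.5*Sigma + 0.7*sigma.  Assumes many
          fractions -- precisely the regime the abstract shows can fail
          for small N.
          """
          return 2.5 * Sigma + 1.64 * (math.hypot(sigma, sigma_p) - sigma_p)

      print(f"{vhmf_margin(0.3, 0.3):.2f} cm  (full VHMF)")
      print(f"{2.5 * 0.3 + 0.7 * 0.3:.2f} cm  (linearized 2.5*Sigma + 0.7*sigma)")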

  19. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals' psychological well-being.

    Science.gov (United States)

    Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry

    2010-12-01

    Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted with the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.

  20. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    Directory of Open Access Journals (Sweden)

    Francisco J. Casas

    2015-08-01

    Full Text Available This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  1. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
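
    A stripped-down stand-in for the proposed test (no covariates, no boosting) still conveys the idea: permute the device labels and compare location and scale statistics; everything below is illustrative.

      import math
      import random

      def permutation_test(x, y, n_perm=5000, seed=42):
          """Permutation p-values for a device effect on location and scale.

          x, y -- measurements from devices A and B on comparable subjects.
          Location statistic: |mean difference| (systematic bias).
          Scale statistic:    |log variance ratio| (random measurement error).
          A stripped-down stand-in for the GAMLSS-boosting-based test of the
          paper, without covariates.
          """
          def stats(a, b):
              ma, mb = sum(a) / len(a), sum(b) / len(b)
              va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
              vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
              return abs(ma - mb), abs(math.log(va / vb))

          rng = random.Random(seed)
          t_loc, t_scale = stats(x, y)
          pooled, n_x = list(x) + list(y), len(x)
          hits_loc = hits_scale = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)                  # permute the device labels
              p_loc, p_scale = stats(pooled[:n_x], pooled[n_x:])
              hits_loc += p_loc >= t_loc
              hits_scale += p_scale >= t_scale
          return (hits_loc + 1) / (n_perm + 1), (hits_scale + 1) / (n_perm + 1)

      # Device B reads 0.5 units high (bias) with twice the noise SD (random error):
      rng = random.Random(0)
      a = [rng.gauss(10.0, 0.5) for _ in range(60)]
      b = [rng.gauss(10.5, 1.0) for _ in range(60)]
      print(permutation_test(a, b))                # both p-values come out small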

  2. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
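
    The modeled deviation is easy to reproduce numerically: average the transmitted intensity over a finite spectral band with a sloped extinction coefficient, and the apparent absorbance bends below the Beer-Lambert line as concentration rises. Parameter values below are illustrative.

      import math

      def apparent_absorbance(conc, eps0=1.0e4, slope=-2.0e2, width_nm=10.0,
                              path_cm=1.0, n_points=101):
          """Band-averaged ('polychromatic') apparent absorbance.

          The detector sees the band-averaged transmitted intensity, so
          A_app = -log10(<10**(-eps(lam)*c*l)>), which falls below the
          monochromatic Beer-Lambert value whenever eps varies across the
          band.  eps(lam) is linear in wavelength; all values illustrative.
          """
          total = 0.0
          for i in range(n_points):
              dlam = (i / (n_points - 1) - 0.5) * width_nm          # nm offset
              eps = eps0 + slope * dlam                             # L/(mol*cm)
              total += 10.0 ** (-eps * conc * path_cm)
          return -math.log10(total / n_points)

      for c in (1e-5, 5e-5, 1e-4):                 # mol/L
          a_mono = 1.0e4 * c * 1.0                 # Beer-Lambert at band centre
          a_poly = apparent_absorbance(c)
          print(f"c={c:.0e}: mono {a_mono:.4f}  poly {a_poly:.4f}  "
                f"error {100 * (a_poly - a_mono) / a_mono:+.2f}%")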

  3. Measuring nuclear-spin-dependent parity violation with molecules: Experimental methods and analysis of systematic errors

    Science.gov (United States)

    Altuntaş, Emine; Ammon, Jeffrey; Cahn, Sidney B.; DeMille, David

    2018-04-01

    Nuclear-spin-dependent parity violation (NSD-PV) effects in atoms and molecules arise from Z0 boson exchange between electrons and the nucleus and from the magnetic interaction between electrons and the parity-violating nuclear anapole moment. It has been proposed to study NSD-PV effects using an enhancement of the observable effect in diatomic molecules [D. DeMille et al., Phys. Rev. Lett. 100, 023003 (2008), 10.1103/PhysRevLett.100.023003]. Here we demonstrate highly sensitive measurements of this type, using the test system 138Ba19F. We show that systematic errors associated with our technique can be suppressed to at least the level of the present statistical sensitivity. With ~170 h of data, we measure the matrix element W of the NSD-PV interaction with uncertainty δW/(2π) < 0.7 Hz for each of two configurations where W must have different signs. This sensitivity would be sufficient to measure NSD-PV effects of the size anticipated across a wide range of nuclei.

  4. Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    Science.gov (United States)

    Imig, Astrid; Stephenson, Edward

    2009-10-01

    The Storage Ring EDM Collaboration used the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17 mm thick carbon placed close to the beam, so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.

  5. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.

    2017-11-27

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.

  6. In-Situ Systematic Error Correction for Digital Volume Correlation Using a Reference Sample

    KAUST Repository

    Wang, B.; Pan, B.; Lubineau, Gilles

    2017-01-01

    The self-heating effect of a laboratory X-ray computed tomography (CT) scanner causes slight change in its imaging geometry, which induces translation and dilatation (i.e., artificial displacement and strain) in reconstructed volume images recorded at different times. To realize high-accuracy internal full-field deformation measurements using digital volume correlation (DVC), these artificial displacements and strains associated with unstable CT imaging must be eliminated. In this work, an effective and easily implemented reference sample compensation (RSC) method is proposed for in-situ systematic error correction in DVC. The proposed method utilizes a stationary reference sample, which is placed beside the test sample to record the artificial displacement fields caused by the self-heating effect of CT scanners. The detected displacement fields are then fitted by a parametric polynomial model, which is used to remove the unwanted artificial deformations in the test sample. Rescan tests of a stationary sample and real uniaxial compression tests performed on copper foam specimens demonstrate the accuracy, efficacy, and practicality of the presented RSC method.
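
    The compensation step reduces to fitting a low-order polynomial to the reference-sample displacement field and subtracting its prediction at the test-sample voxels. A first-order sketch (the paper's actual polynomial model may differ):

      import numpy as np

      def fit_drift_model(ref_points, ref_disp):
          """Fit u(x, y, z) = a0 + a1*x + a2*y + a3*z per displacement component.

          A first-order polynomial captures the translation plus dilatation
          induced by scanner self-heating.  ref_points: (N, 3) voxel
          coordinates in the reference sample; ref_disp: (N, 3) measured
          displacement vectors.  Returns a (4, 3) coefficient matrix.
          """
          A = np.hstack([np.ones((len(ref_points), 1)), ref_points])
          coef, *_ = np.linalg.lstsq(A, ref_disp, rcond=None)
          return coef

      def compensate(coef, test_points, test_disp):
          """Subtract the predicted artificial displacement at the test voxels."""
          A = np.hstack([np.ones((len(test_points), 1)), test_points])
          return test_disp - A @ coef

      # Synthetic check: 0.5-voxel translation plus 0.1% dilatation drift seen
      # identically by the reference and test samples.
      rng = np.random.default_rng(0)
      pts = rng.uniform(0.0, 100.0, size=(500, 3))
      drift = 0.5 + 0.001 * pts
      coef = fit_drift_model(pts, drift)
      print(np.abs(compensate(coef, pts, drift)).max())   # ~0 after compensation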

  7. ASSESSMENT OF SYSTEMATIC CHROMATIC ERRORS THAT IMPACT SUB-1% PHOTOMETRIC PRECISION IN LARGE-AREA SKY SURVEYS

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Boada, S.; Mondrik, N.; Nagasawa, D. [George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy, and Department of Physics and Astronomy, Texas A and M University, College Station, TX 77843 (United States); Tucker, D.; Annis, J.; Finley, D. A.; Kent, S.; Lin, H.; Marriner, J.; Wester, W. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Kessler, R.; Scolnic, D. [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Burke, D. L.; Rykoff, E. S. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); James, D. J.; Walker, A. R. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Collaboration: DES Collaboration; and others

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for

  8. Systematic control of large computer programs

    International Nuclear Information System (INIS)

    Goedbloed, J.P.; Klieb, L.

    1986-07-01

    A package of CCL, UPDATE, and FORTRAN procedures is described which facilitates the systematic control and development of large scientific computer programs. The package provides a general tool box for this purpose which contains many conveniences for the systematic administration of files, editing, reformatting of line printer output files, etc. In addition, a small number of procedures is devoted to the problem of structured development of a large computer program which is used by a group of scientists. The essence of the method is contained in three procedures N, R, and X for the creation of a new UPDATE program library, its revision, and its execution, respectively, and a procedure REVISE which provides a joint editor/UPDATE session combining the advantages of the two systems, viz. speed and rigor. (Auth.)

  9. Increased errors and decreased performance at night: A systematic review of the evidence concerning shift work and quality.

    Science.gov (United States)

    de Cordova, Pamela B; Bradford, Michelle A; Stone, Patricia W

    2016-02-15

    Shift workers have worse health outcomes than employees who work standard business hours. However, it is unclear how this poorer health may be related to employee work productivity. The purpose of this systematic review is to assess the relationship between shift work and errors and performance. Searches of MEDLINE/PubMed, EBSCOhost, and CINAHL were conducted to identify articles that examined the relationship between shift work, errors, quality, productivity, and performance. All articles were assessed for study quality. A total of 435 abstracts were screened, with 13 meeting inclusion criteria. Eight studies were rated to be of strong methodological quality. Nine studies demonstrated that night shift workers committed more errors and had decreased performance. Night shift workers have worse health that may contribute to errors and decreased performance in the workplace.

  10. Mapping the absolute magnetic field and evaluating the quadratic Zeeman-effect-induced systematic error in an atom interferometer gravimeter

    Science.gov (United States)

    Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim

    2017-09-01

    Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on Raman pulse duration and frequency step size, present the vector and tensor light shift induced magnetic field measurement offsets, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and for reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atom interferometer gravimeters.
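
    As a rough numerical illustration of how a mapped field profile turns into a gravity bias, one can propagate the second-order Zeeman shift of the 87Rb clock transition (K ≈ 575.15 Hz/G²) along the free-fall trajectory and weight it with a simplified square-wave sensitivity function. The field profile, pulse separation, and weighting below are assumptions, not GAIN's actual parameters:

```python
import numpy as np

K = 575.15      # quadratic Zeeman coefficient, 87Rb clock transition [Hz/G^2]
keff = 1.61e7   # effective Raman wavevector, ~2 * 2*pi / 780 nm [rad/m]
T = 0.26        # Raman pulse separation [s] (assumed, not GAIN's value)

t = np.linspace(0.0, 2 * T, 4001)
z = 0.5 * 9.81 * t ** 2                                # free-fall distance [m]
B = 1e-3 * (1.0 + 0.1 * np.sin(2 * np.pi * z / 0.5))   # assumed field map [G]

# Simplified square-wave sensitivity to a clock-frequency shift:
# -1 before the middle (pi) pulse, +1 after it.
g_s = np.where(t < T, -1.0, 1.0)
dphi = 2 * np.pi * K * np.trapz(g_s * B ** 2, t)   # interferometer phase bias [rad]
dg = dphi / (keff * T ** 2)                        # gravity bias [m/s^2]
print(f"quadratic Zeeman bias ~ {dg / 1e-8:.4f} uGal")
```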

  11. How are medication errors defined? A systematic literature review of definitions and characteristics

    DEFF Research Database (Denmark)

    Lisby, Marianne; Nielsen, L P; Brock, Birgitte

    2010-01-01

    Multiplicity in terminology has been suggested as a possible explanation for the variation in the prevalence of medication errors. So far, few empirical studies have challenged this assertion. The objective of this review was, therefore, to describe the extent and characteristics of medication error definitions in hospitals and to consider the consequences for measuring the prevalence of medication errors.

  12. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    Science.gov (United States)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly correlated stochastic noise are more insidious, and less attention is paid to them, partly due to the common belief that, thanks to averaging effects, they only contribute to proton range uncertainties and not to systematic errors. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Specifically, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and of the range in the continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
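
    The bias mechanism at an angular point is a Jensen-inequality effect and can be reproduced in a few lines: push zero-mean Gaussian HU noise through a piecewise-linear HU-to-RSP curve and compare the mean converted RSP against the noiseless conversion. The curve nodes and noise level below are invented for illustration:

```python
import numpy as np

# Toy piecewise-linear HU-to-RSP curve with an angular point at HU = 100
# (node values invented for illustration).
hu_nodes = np.array([-1000.0, 100.0, 2000.0])
rsp_nodes = np.array([0.0, 1.05, 2.5])

def hu_to_rsp(hu):
    return np.interp(hu, hu_nodes, rsp_nodes)

rng = np.random.default_rng(1)
sigma = 30.0   # zero-mean CT noise standard deviation [HU], assumed
for hu_true in (-300.0, 100.0, 500.0):
    noisy = hu_true + sigma * rng.normal(size=200_000)
    bias = hu_to_rsp(noisy).mean() - hu_to_rsp(hu_true)
    rel = 100.0 * bias / hu_to_rsp(hu_true)
    # Away from the kink the bias vanishes; at HU = 100 it does not.
    print(f"HU = {hu_true:6.0f}: mean RSP bias = {rel:+.3f}%")
```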

  13. Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors

    Directory of Open Access Journals (Sweden)

    G. Yang

    2003-06-01

    Examples of current research into systematic errors in climate models are used to demonstrate the importance of scale interactions on diurnal, intraseasonal and seasonal timescales for the mean and variability of the tropical climate system. This work has enabled some conclusions to be drawn about possible processes that may need to be represented, and some recommendations to be made regarding model improvements. It has been shown that the Maritime Continent heat source is a major driver of the global circulation, yet is poorly represented in GCMs. A new climatology of the diurnal cycle has been used to provide compelling evidence of important land-sea breeze and gravity wave effects, which may play a crucial role in the heat and moisture budget of this key region for the tropical and global circulation. The role of the diurnal cycle has also been emphasized for intraseasonal variability associated with the Madden-Julian Oscillation (MJO). It is suggested that the diurnal cycle in sea surface temperature (SST) during the suppressed phase of the MJO leads to a triggering of cumulus congestus clouds, which serve to moisten the free troposphere and hence precondition the atmosphere for the next active phase. It has been further shown that coupling between the ocean and atmosphere on intraseasonal timescales leads to a more realistic simulation of the MJO. These results stress the need for models to be able to simulate, firstly, the observed tri-modal distribution of convection and, secondly, the coupling between the ocean and atmosphere on diurnal to intraseasonal timescales. It is argued, however, that the current representation of the ocean mixed layer in coupled models is not adequate to represent the complex structure of the observed mixed layer, in particular the formation of salinity barrier layers, which can potentially provide much stronger local coupling between the atmosphere and ocean on diurnal to intraseasonal timescales.

  14. Strategies to reduce the systematic error due to tumor and rectum motion in radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Hoogeman, Mischa S.; Herk, Marcel van; Bois, Josien de; Lebesque, Joos V.

    2005-01-01

    Background and purpose: The goal of this work is to develop and evaluate strategies to reduce the uncertainty in the prostate position and rectum shape that arises in the preparation stage of the radiation treatment of prostate cancer. Patients and methods: Nineteen prostate cancer patients, who were treated with 3-dimensional conformal radiotherapy, each received a planning CT scan and 8-13 repeat CT scans during the treatment period. We quantified prostate motion relative to the pelvic bone by first matching the repeat CT scans on the planning CT scan using the bony anatomy. Subsequently, each contoured prostate, including seminal vesicles, was matched on the prostate in the planning CT scan to obtain the translations and rotations. The variation in prostate position was determined in terms of the systematic, random and group mean errors. We tested the performance of two correction strategies to reduce the systematic error due to prostate motion. The first strategy, the pre-treatment strategy, used only the initial rectum volume in the planning CT scan to adjust the angle of the prostate with respect to the left-right (LR) axis and the shape and position of the rectum. The second strategy, the adaptive strategy, used the data of repeat CT scans to improve the estimate of the prostate position and rectum shape during the treatment. Results: The largest component of prostate motion was a rotation around the LR axis. The systematic error (1 SD) was 5.1 deg and the random error was 3.6 deg (1 SD). The average LR-axis rotation between the planning and the repeat CT scans correlated significantly with the rectum volume in the planning CT scan (r=0.86, P<0.0001). Correction of the rotational position on the basis of the planning rectum volume alone reduced the systematic error by 28%. A correction based on the data of the planning CT scan and 4 repeat CT scans reduced the systematic error over the complete treatment period by a factor of 2. When the correction was
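
    The systematic/random split quoted above follows the standard decomposition for repeat-measurement data: the group mean error M is the mean of the per-patient mean deviations, the systematic error Σ is their spread (1 SD), and the random error σ is the root mean square of the per-patient SDs. A small numpy sketch on synthetic rotation data (patient and scan counts mimic the study; the values themselves are simulated):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic LR-axis rotations [deg]: 19 patients, 8-13 repeat scans each,
# drawn so the true systematic SD is 5.1 deg and the random SD is 3.6 deg.
patients = [rng.normal(rng.normal(0.0, 5.1), 3.6, size=rng.integers(8, 14))
            for _ in range(19)]

means = np.array([p.mean() for p in patients])
sds = np.array([p.std(ddof=1) for p in patients])

M = means.mean()                     # group mean error
Sigma = means.std(ddof=1)            # systematic error (1 SD)
sigma = np.sqrt((sds ** 2).mean())   # random error (1 SD)
print(f"M = {M:+.2f} deg, Sigma = {Sigma:.2f} deg, sigma = {sigma:.2f} deg")
```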

  15. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response.

    Science.gov (United States)

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2014-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimulus disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error rate for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and varied with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy in maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.

  16. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    Science.gov (United States)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  17. Systematic errors in the readings of track etch neutron dosemeters caused by the energy dependence of response

    International Nuclear Information System (INIS)

    Tanner, R.J.; Thomas, D.J.; Bartlett, D.T.; Horwood, N.

    1999-01-01

    A study has been performed to assess the extent to which variations in the energy dependence of response of neutron personal dosemeters can cause systematic errors in readings obtained in workplace fields. This involved a detailed determination of the response functions of personal dosemeters used in the UK. These response functions were folded with workplace spectra to ascertain the under- or over-response in workplace fields.
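
    The folding step is a ratio of two spectrum-weighted integrals: the dosemeter reading follows from the response function R(E) folded with the workplace fluence spectrum Φ(E), while the true dose follows from the fluence-to-dose-equivalent coefficients h(E) folded with the same spectrum, so the over-response is ∫R(E)Φ(E)dE / ∫h(E)Φ(E)dE. A numpy sketch with invented response and spectrum shapes, not the UK dosemeter data:

```python
import numpy as np

E = np.logspace(-8, 1, 400)   # neutron energy grid [MeV]

def over_response(resp, h, fluence):
    """Reading-to-true-dose ratio for one workplace spectrum:
    integral of R(E)*Phi(E) over integral of h(E)*Phi(E)."""
    return np.trapz(resp * fluence, E) / np.trapz(h * fluence, E)

# Invented shapes for illustration only: h(E) rises with energy, while the
# dosemeter over-responds to thermal neutrons.
h = 1.0 + 40.0 * E / (E + 0.1)                    # pseudo fluence-to-Hp(10)
resp = h * (1.0 + 2.0 * np.exp(-E / 1e-6))        # dosemeter response function
soft = 1.0 / (E + 1e-7)                           # heavily moderated spectrum
hard = np.exp(-((np.log(E) - np.log(1.0)) ** 2))  # fission-like peak at 1 MeV

print(f"soft field: reading/true = {over_response(resp, h, soft):.2f}")
print(f"hard field: reading/true = {over_response(resp, h, hard):.2f}")
```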

  18. Systematic errors in the readings of track etch neutron dosemeters caused by the energy dependence of response

    CERN Document Server

    Tanner, R J; Bartlett, D T; Horwood, N

    1999-01-01

    A study has been performed to assess the extent to which variations in the energy dependence of response of neutron personal dosemeters can cause systematic errors in readings obtained in workplace fields. This involved a detailed determination of the response functions of personal dosemeters used in the UK. These response functions were folded with workplace spectra to ascertain the under- or over-response in workplace fields.

  19. Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments

    Science.gov (United States)

    Forster, Sarah E.; Cho, Raymond Y.

    2014-01-01

    There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900

  20. Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Patel, Sandeep K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii, and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0, analyzing the clusters using either their large or mean angular core radius, are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta-model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.

  1. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...
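
    A toy illustration of the monitoring approach for error type (c), timing errors: system-level code observes timestamped events emitted by an application task and flags any inter-event gap exceeding a declared deadline. The class and task names are invented, and the constraint form is deliberately simplified:

```python
import time

class TimingMonitor:
    """System-software monitor: checks that consecutive events emitted by
    an application task arrive within a declared deadline."""
    def __init__(self, task, deadline_s):
        self.task, self.deadline, self.last = task, deadline_s, None

    def observe(self, timestamp):
        if self.last is not None and timestamp - self.last > self.deadline:
            late = timestamp - self.last - self.deadline
            print(f"timing error: task {self.task!r} late by {late:.3f} s")
        self.last = timestamp

mon = TimingMonitor("pressure_loop", deadline_s=0.100)
for gap in (0.05, 0.08, 0.25, 0.06):   # the 0.25 s gap violates the deadline
    mon.observe(time.monotonic())
    time.sleep(gap)
mon.observe(time.monotonic())
```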

  2. Quantitative estimation of the human error probability during soft control operations

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jung, Wondea

    2013-01-01

    Highlights: ► An HRA method to evaluate execution HEP for soft control operations was proposed. ► The soft control tasks were analyzed and design-related influencing factors were identified. ► An application to evaluate the effects of soft controls was performed. - Abstract: In this work, a method was proposed for quantifying human errors that can occur during operation executions using soft controls. Soft controls of advanced main control rooms have totally different features from conventional controls, and thus they may have different human error modes and occurrence probabilities. It is important to identify the human error modes and to quantify the error probabilities in order to evaluate the reliability of the system and prevent errors. This work suggests an evaluation framework for quantifying the execution error probability when using soft controls. In the application, the human error probabilities of soft controls were observed to be either higher or lower than those of conventional controls, depending on the design quality of the advanced main control rooms.

  3. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    Science.gov (United States)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online correction scheme (i.e., within the model) for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly, and first ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the error growth in the short term is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show reduction in model bias in the 6-hr forecast. This approach can then be used to guide and optimize the design of sub
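
    In sketch form, the estimation step is a time average of analysis increments converted to a tendency, which is then added as a forcing term at every model step. Array shapes and the synthetic numbers below are assumptions for illustration, not GFS fields:

```python
import numpy as np

WINDOW_S = 6 * 3600.0   # assimilation window [s]

def bias_tendency(analysis_increments):
    """Time-mean analysis increment converted to a tendency, assuming
    initial model errors grow linearly over the 6-hr window.
    analysis_increments: (n_cycles, nlat, nlon) array for one variable."""
    return analysis_increments.mean(axis=0) / WINDOW_S

def corrected_tendency(model_tendency, bias):
    # Online correction: the estimated bias tendency is added as a
    # forcing term in the model tendency equation at every time step.
    return model_tendency + bias

# Synthetic check: a 0.5 K mean 6-hr increment is recovered as a tendency.
rng = np.random.default_rng(3)
ai = 0.5 + 0.2 * rng.normal(size=(120, 90, 180))
print(f"mean 6-hr correction: {bias_tendency(ai).mean() * WINDOW_S:.3f} K")
```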

  4. Residual-based Methods for Controlling Discretization Error in CFD

    Science.gov (United States)

    2015-08-24

    Only a fragment of the report body survives; it defines a cell-averaged quantity through a coordinate transformation,

    $\bar{f}_i = \frac{1}{V_i}\int_{V_i} f(x)\,dV \approx \frac{1}{V_i}\sum_{j} w_j\, f(x_j)\,\det(J)$,   (25)

    where J is the Jacobian of the coordinate transformation and the weights can be found from the underlying quadrature rule.

  5. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Wee, L.; Harper, C.S.

    2010-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC errors on step and shoot IMRT of prostate cancer. Twelve MLC leaf bank perturbations were introduced to six prostate IMRT treatment plans to simulate MLC systematic errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) to planning target volume (PTV), rectum and bladder. Negative perturbations of MLC had been found to produce greater changes to endpoint dose parameters than positive perturbations of MLC (p < 0.05). Negative and positive synchronized MLC perturbations of 1 mm resulted in median changes of -2.32 and 1.78%, respectively, to D95 of PTV, whereas asynchronized MLC perturbations of the same direction and magnitude resulted in median changes of 1.18 and 0.90%, respectively. Doses to rectum were generally more sensitive to systematic MLC errors compared to bladder. Synchronized MLC perturbations of 1 mm resulted in median changes of endpoint dose parameters to both rectum and bladder from about 1 to 3%. Maximum reductions of -4.44 and -7.29% were recorded for CI and HTA, respectively, due to synchronized MLC perturbation of 1 mm. In summary, MLC errors resulted in a measurable amount of dose changes to PTV and surrounding critical structures in prostate IMRT. (author)

  6. Systematic errors in the tables of theoretical total internal conversion coefficients

    International Nuclear Information System (INIS)

    Dragoun, O.; Rysavy, M.

    1992-01-01

    Some of the total internal conversion coefficients presented in the widely used tables of Rosel et al (1978 Atom. Data Nucl. Data Tables 21, 291) were found to be erroneous. The errors appear for some low transition energies, all multipolarities, and probably for all elements. The origin of the errors is explained. The subshell conversion coefficients of Rosel et al, where available, agree with our calculations to within a few percent. (author)

  7. Disturbance Error Reduction in Multivariable Optimal Control Systems

    Directory of Open Access Journals (Sweden)

    Ole A. Solheim

    1983-01-01

    The paper deals with the design of optimal multivariable controllers, using a modified LQR approach. All controllers discussed contain proportional feedback and, in addition, there may be feedforward, integral action or state estimation.
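
    A minimal sketch of one such design, LQR with integral action on the tracked output: augment the plant with an integrator state on the tracking error and solve the Riccati equation for the augmented system. The plant matrices and weights below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative 2-state plant x' = Ax + Bu with tracked output y = Cx.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Augment with an integrator state z' = y - r on the tracking error.
n, m = A.shape[0], B.shape[1]
Aa = np.block([[A, np.zeros((n, 1))], [C, np.zeros((1, 1))]])
Ba = np.vstack([B, np.zeros((1, m))])

Q = np.diag([10.0, 1.0, 50.0])   # state weights; heavy weight on the integral
R = np.eye(m)
P = solve_continuous_are(Aa, Ba, Q, R)
K = np.linalg.solve(R, Ba.T @ P)  # u = -K [x; z]: proportional + integral action
print("feedback gain:", K.round(3))
```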

  8. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice.

  9. Assessing systematic errors in GOSAT CO2 retrievals by comparing assimilated fields to independent CO2 data

    Science.gov (United States)

    Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.

    2012-12-01

    Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (version 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. With this assurance, we compare
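
    The correction described is, in essence, a linear regression of retrieval-minus-model differences on the suspect physical variables, which is then subtracted from the retrievals. A sketch with invented coefficients and synthetic stand-ins for the model fields:

```python
import numpy as np

def fit_correction(residual, predictors):
    """Regress GOSAT-minus-model CO2 differences [ppm] on suspect physical
    variables (e.g., aerosol amount, surface albedo, column-mass correction)."""
    X = np.column_stack([np.ones(len(residual)), predictors])
    beta, *_ = np.linalg.lstsq(X, residual, rcond=None)
    return beta

def apply_correction(retrieval, predictors, beta):
    X = np.column_stack([np.ones(len(retrieval)), predictors])
    return retrieval - X @ beta   # remove the part explained by the predictors

# Synthetic check with invented error coefficients.
rng = np.random.default_rng(6)
aod, albedo = rng.uniform(0, 0.3, 1000), rng.uniform(0.05, 0.5, 1000)
model = 395.0 + rng.normal(0, 0.5, 1000)   # stands in for the prior-model CO2
gosat = model + 4.0 * aod - 1.5 * albedo + rng.normal(0, 0.8, 1000)
P = np.column_stack([aod, albedo])
beta = fit_correction(gosat - model, P)
print(np.std(gosat - model), np.std(apply_correction(gosat, P, beta) - model))
```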

  10. Human error recovery failure probability when using soft controls in computerized control rooms

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Wondea

    2014-01-01

    Many studies have categorized the recovery process into three phases: detection of the problem situation, explanation of problem causes or countermeasures against the problem, and end of recovery. Although the focus of recovery research has been on categorizing recovery phases and modeling the recovery process, research related to human recovery failure probabilities has not been actively performed, and only a few studies regarding recovery failure probabilities have been carried out empirically. In summary, the research performed so far has several problems in terms of use in human reliability analysis (HRA). By adopting new human-system interfaces that are based on computer-based technologies, the operating environment of MCRs in NPPs has changed from conventional MCRs to advanced MCRs. Because of the different interfaces between conventional and advanced MCRs, different recovery failure probabilities should be considered in the HRA for advanced MCRs. Therefore, this study carries out an empirical analysis of human error recovery probabilities under an advanced MCR mockup called the compact nuclear simulator (CNS). The aim of this work is not only to compile a recovery failure probability database using the simulator for advanced MCRs but also to collect recovery failure probabilities for defined human error modes, in order to determine which human error mode has the highest recovery failure probability. The results show that the recovery failure probability for wrong screen selection was the lowest among the human error modes, which means that most human errors related to wrong screen selection can be recovered. On the other hand, the recovery failure probabilities of operation selection omission and delayed operation were 1.0. These results imply that once subjects omitted a task in the procedure, they had difficulties finding and recovering their errors without a supervisor's assistance. Also, wrong screen selection had an effect on delayed operation. That is, wrong screen

  11. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.

  12. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.

  13. Continuous fractional-order Zero Phase Error Tracking Control.

    Science.gov (United States)

    Liu, Lu; Tian, Siyuan; Xue, Dingyu; Zhang, Tao; Chen, YangQuan

    2018-04-01

    A continuous-time fractional-order feedforward control algorithm for tracking desired time-varying input signals is proposed in this paper. The presented controller cancels the phase shift caused by the zeros and poles of the controlled closed-loop fractional-order system, so it is called the Fractional-Order Zero Phase Error Tracking Controller (FZPETC). The controlled systems are divided into two categories, i.e. with and without non-cancellable (non-minimum-phase) zeros, which lie in the unstable region or on the stability boundary. Each kind of system has a targeted FZPETC design strategy. The improved tracking performance has been evaluated successfully by applying the proposed controller to three different kinds of fractional-order controlled systems. In addition, a modified quasi-perfect tracking scheme is presented for those systems which may not have future tracking trajectory information available or which have problems with high-frequency disturbance rejection if the perfect tracking algorithm is applied. A simulation comparison and a hardware-in-the-loop thermal Peltier platform are used to validate the practicality of the proposed quasi-perfect control algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Dosimetric impact of systematic MLC positional errors on step and shoot IMRT for prostate cancer: a planning study

    International Nuclear Information System (INIS)

    Ung, N.M.; Harper, C.S.; Wee, L.

    2011-01-01

    Full text: The positional accuracy of multileaf collimators (MLC) is crucial in ensuring precise delivery of intensity-modulated radiotherapy (IMRT). The aim of this planning study was to investigate the dosimetric impact of systematic MLC positional errors on step and shoot IMRT of prostate cancer. A total of 12 perturbations of MLC leaf banks were introduced to six prostate IMRT treatment plans to simulate MLC systematic positional errors. Dose volume histograms (DVHs) were generated for the extraction of dose endpoint parameters. Plans were evaluated in terms of changes to the defined endpoint dose parameters, conformity index (CI) and healthy tissue avoidance (HTA) to planning target volume (PTV), rectum and bladder. Negative perturbations of MLC were found to produce greater changes to endpoint dose parameters than positive perturbations of MLC (p < 0.05). Negative and positive asynchronised MLC perturbations of 1 mm resulted in median changes in D95 of -1.2 and 0.9% respectively. Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in D95 of -2.3 and 1.8% respectively. Doses to rectum were generally more sensitive to systematic MLC errors compared to bladder (p < 0.01). Negative and positive synchronised MLC perturbations of 1 mm in one direction resulted in median changes in endpoint dose parameters of rectum and bladder from 1.0 to 2.5%. Maximum reductions of -4.4 and -7.3% were recorded for conformity index (CI) and healthy tissue avoidance (HTA) respectively, due to synchronised MLC perturbation of 1 mm. MLC errors resulted in dosimetric changes in IMRT plans for prostate cancer. (author)

  15. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    In developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from the skew of their judgment distributions. The model further predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support.

  16. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I" 2CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  17. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    Science.gov (United States)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Carbon Column Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data, using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different from others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. We compare it to more classical

  18. Analysis technique for controlling system wavefront error with active/adaptive optics

    Science.gov (United States)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
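
    At its core, the control step is a linear least-squares fit: with the system WFE sampled at N points as a vector w and the actuator influence functions as the columns of an N x m matrix H, the commands minimize ||w + H c||, i.e. c = -pinv(H) w, and the residual supplies the fitting-error estimate. A numpy sketch with invented influence functions (not SigFit's actual interface):

```python
import numpy as np

rng = np.random.default_rng(4)
N, m = 500, 12   # WFE sample points and actuator count (invented)

H = rng.normal(size=(N, m))   # actuator influence functions as columns
w = H @ rng.normal(size=m) + 0.05 * rng.normal(size=N)   # correctable + residual

c = -np.linalg.pinv(H) @ w    # least-squares actuator commands
residual = w + H @ c          # system WFE after correction
print(f"RMS WFE: {w.std():.3f} -> {residual.std():.3f}")

# For adaptive optics the same matrices can be written in state-space form,
# e.g. w_{k+1} = w_k + H u_k + d_k with u_k the incremental commands.
```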

  19. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Science.gov (United States)

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide a unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...

  20. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    Science.gov (United States)

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  1. Error-enhancing robot therapy to induce motor control improvement in childhood onset primary dystonia

    Directory of Open Access Journals (Sweden)

    Casellato Claudia

    2012-07-01

    Background: Robot-generated deviating forces during multijoint reaching movements have been applied to investigate motor control and to tune neuromotor adaptation. Can the application of force to limbs improve motor learning? In this framework, the response to altered dynamic environments of children affected by primary dystonia has never been studied. Methods: As a preliminary pilot study, eleven children with primary dystonia and eleven age-matched healthy control subjects were asked to perform upper limb movements, triangle-reaching (three directions) and circle-writing, using a haptic robot interacting with ad-hoc developed task-specific visual interfaces. Three dynamic conditions were provided: null additive external force (A), constant disturbing force (B), and deactivation of the additive external force again (C). The path length for each trial was computed from the recorded position data and interaction events. Results: The results show that the disturbing force affects the movement outcomes significantly in healthy but not in dystonic subjects, who are already compromised in the reference condition: the external alteration uncalibrates the healthy sensorimotor system, while the dystonic one is already strongly uncalibrated. The lack of systematic compensation for perturbation effects during condition B is reflected in the absence of after-effects in condition C, which would be the evidence that the CNS generates a prediction of the perturbing forces using an internal model of the environment. The most promising finding is that in the dystonic population the altered dynamic exposure seems to induce a subsequent improvement, i.e. a beneficial after-effect in terms of optimal path control, compared with the corresponding reference movement outcome. Conclusions: Short-time error-enhancing training in dystonia could represent an effective approach for motor performance improvement, since the exposure to controlled dynamic alterations induces a refining

  2. Subdivision Error Analysis and Compensation for Photoelectric Angle Encoder in a Telescope Control System

    Directory of Open Access Journals (Sweden)

    Yanrui Su

    2015-01-01

    As the position sensor, the photoelectric angle encoder affects the accuracy and stability of a telescope control system (TCS). A TCS-based subdivision error compensation method for the encoder is proposed. First, six types of subdivision error sources are extracted from the mathematical expressions of the subdivision signals. Then the period-length relationships between subdivision signals and subdivision errors are deduced, and an error compensation algorithm utilizing only the shaft position of the TCS is put forward, along with two control models: in Model I the algorithm is applied only to the speed loop of the TCS, while in Model II it is applied to both the speed loop and the position loop. In the context of an actual project, the elevation jitter of the telescope is discussed to decide the necessity of DC-type subdivision error compensation. Low-speed elevation performance before and after error compensation is compared, leading to the choice of Model II. In contrast to the original performance, the maximum position error of the elevation with DC subdivision error compensation is reduced by approximately 47.9%, from 1.42″ to 0.74″, and the elevation jitter decreases substantially. This method can compensate the encoder subdivision errors effectively and improve the stability of the TCS.
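
    Because subdivision errors repeat with the period of the encoder's sine/cosine signals, one way to realize such a compensation is to model the error as low-order harmonics of the within-period phase, fitted once against a reference and thereafter evaluated from the shaft position alone. The sketch below follows that general idea; the line count, harmonic order, and amplitudes are invented and this is not the paper's exact algorithm:

```python
import numpy as np

PERIOD = 2 * np.pi / 16384   # one signal period [rad]; 16384 lines assumed

def harmonic_basis(theta, n_harm=2):
    # Phase within one signal period; subdivision errors repeat with it.
    phi = 2 * np.pi * (theta % PERIOD) / PERIOD
    cols = [np.ones_like(phi)]
    for k in range(1, n_harm + 1):
        cols += [np.sin(k * phi), np.cos(k * phi)]
    return np.column_stack(cols)

def fit_subdivision_error(theta_meas, theta_ref):
    coeff, *_ = np.linalg.lstsq(harmonic_basis(theta_meas),
                                theta_meas - theta_ref, rcond=None)
    return coeff

def compensate(theta_meas, coeff):
    return theta_meas - harmonic_basis(theta_meas) @ coeff

# Synthetic check: a first-harmonic subdivision error is fitted and removed.
rng = np.random.default_rng(5)
theta = np.sort(rng.uniform(0.0, 0.01, 2000))
err = 2e-7 * np.sin(2 * np.pi * (theta % PERIOD) / PERIOD)
coeff = fit_subdivision_error(theta + err, theta)
print(f"max residual: {np.abs(compensate(theta + err, coeff) - theta).max():.1e}")
```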

  3. Discrete-Time Stable Generalized Self-Learning Optimal Control With Approximation Errors.

    Science.gov (United States)

    Wei, Qinglai; Li, Benkai; Song, Ruizhuo

    2018-04-01

    In this paper, a generalized policy iteration (GPI) algorithm with approximation errors is developed for solving infinite horizon optimal control problems for nonlinear systems. The developed stable GPI algorithm provides a general structure for discrete-time iterative adaptive dynamic programming algorithms, by which most discrete-time reinforcement learning algorithms can be described using the GPI structure. This is the first time that approximation errors have been explicitly considered in a GPI algorithm. The properties of the stable GPI algorithm with approximation errors are analyzed. The admissibility of the approximate iterative control law can be guaranteed if the approximation errors satisfy the admissibility criteria. The convergence of the developed algorithm is established, which shows that the iterative value function converges to a finite neighborhood of the optimal performance index function if the approximation errors satisfy the convergence criterion. Finally, numerical examples and comparisons are presented.

  4. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system

  5. Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

    Science.gov (United States)

    Stephenson, Edward; Imig, Astrid

    2009-10-01

    The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
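
    The cross ratio combines left/right detector counts for the two polarization states so that detector acceptance and luminosity cancel at first order. A small numeric sketch (the counts, acceptance mismatch, and luminosity difference are invented):

```python
import numpy as np

def cross_ratio_asymmetry(L_up, R_up, L_dn, R_dn):
    """Left/right counts for spin-up and spin-down states.
    Returns epsilon = p*A_y; acceptances and luminosities cancel."""
    r = np.sqrt((L_up * R_dn) / (L_dn * R_up))
    return (r - 1.0) / (r + 1.0)

# Invented example: true epsilon = 0.10, with a 20% left/right acceptance
# mismatch and a 10% luminosity difference between the two spin states;
# neither biases the cross-ratio result.
eps = 0.10
aL, aR, lum_up, lum_dn = 1.2, 1.0, 1.1, 1.0
L_up = aL * lum_up * (1 + eps); R_up = aR * lum_up * (1 - eps)
L_dn = aL * lum_dn * (1 - eps); R_dn = aR * lum_dn * (1 + eps)
print(cross_ratio_asymmetry(L_up, R_up, L_dn, R_dn))   # recovers 0.10
```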

  6. Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect

    Science.gov (United States)

    Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

    1998-01-01

    Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with the cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

  7. A systematic review of patient medication error on self-administering medication at home.

    Science.gov (United States)

    Mira, José Joaquín; Lorenzo, Susana; Guilabert, Mercedes; Navarro, Isabel; Pérez-Jover, Virtudes

    2015-06-01

    Medication errors have been analyzed as a health professionals' responsibility (due to mistakes in prescription, preparation or dispensing). However, sometimes patients themselves (or their caregivers) make mistakes in the administration of the medication. The epidemiology of patient medication errors (PEs) has been scarcely reviewed in spite of its impact on people, on therapeutic effectiveness and on incremental cost for the health systems. This study reviews and describes the methodological approaches and results of published studies on the frequency, causes and consequences of medication errors committed by patients at home. A review of research articles published between 1990 and 2014 was carried out using MEDLINE, Web-of-Knowledge, Scopus, Tripdatabase and Index Medicus. The reported frequency of PEs ranged between 19% and 59%. The elderly and preschoolers were involved in more mistakes than other groups. The most common errors were: incorrect dosage, forgetting, mixing up medications, failing to recall indications and taking out-of-date or inappropriately stored drugs. The majority of these mistakes had no negative consequences. Health literacy, information and communication, and complexity of use of dispensing devices were identified as causes of PEs. Apps and other new technologies offer several opportunities for improving drug safety.

  8. Trellis and turbo coding iterative and graph-based error control coding

    CERN Document Server

    Schlegel, Christian B

    2015-01-01

    This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. The book covers convolutional, turbo, and low-density parity-check (LDPC) coding and polar codes in a unified framework, includes advanced research-related developments such as spatial coupling, and focuses on algorithmic and implementation aspects of error control coding.

  9. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2015-01-01

    HRA methods such as the Technique for Human Error Rate Prediction (THERP), Korean Human Reliability Analysis (K-HRA), Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), Cognitive Reliability and Error Analysis Method (CREAM), and Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H) have been applied to NPP maintenance and operation. Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has been considerably changed by the adoption of new human-system interfaces such as computer-based soft controls. Among the many features of advanced MCRs, soft controls are an important one because operation actions in NPP advanced MCRs are performed through them. Consequently, the conventional methods may not sufficiently consider the features of soft control execution human errors. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested, based on a soft control task analysis and literature reviews of widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human error in advanced MCRs is developed. First, the factors which an HRA method in advanced MCRs should encompass are derived based on the literature review and the soft control task analysis. Based on the derived factors, an execution HRA framework for advanced MCRs is developed, mainly focusing on the features of soft control. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, an HRA database is developed under lab-scale simulation

  10. Experiences of and support for nurses as second victims of adverse nursing errors: a qualitative systematic review.

    Science.gov (United States)

    Cabilan, C J; Kynoch, Kathryn

    2017-09-01

    Second victims are clinicians who have made adverse errors and feel traumatized by the experience. The current published literature on second victims is mainly representative of doctors, hence nurses' experiences are not fully depicted. This systematic review was necessary to understand the second victim experience for nurses, explore the support provided, and recommend appropriate support systems for nurses. Its objectives were to synthesize the best available evidence on nurses' experiences as second victims, and to explore their experiences of the support they receive and the support they need. Participants were registered nurses who had made adverse errors. The review included studies that described nurses' experiences as second victims and/or the support they received after making adverse errors, conducted in any health care setting worldwide. The qualitative studies included were grounded theory, discourse analysis and phenomenology. A structured search strategy was used to locate all unpublished and published qualitative studies, limited to the English language and published between 1980 and February 2017. The references of studies selected for eligibility screening were hand-searched for additional literature. Eligible studies were assessed by two independent reviewers for methodological quality using a standardized critical appraisal instrument from the Joanna Briggs Institute Qualitative Assessment and Review Instrument (JBI QARI). Themes and narrative statements were extracted from papers included in the review using the standardized data extraction tool from JBI QARI. Data synthesis was conducted using the Joanna Briggs Institute meta-aggregation approach. Nine qualitative studies were included in the review. The narratives of 284 nurses generated a total of 43 findings, which formed 15 categories based on similarity of meaning. Four synthesized findings were generated from the categories, the first being that the error brings a considerable emotional burden to the nurse.

  11. Errors, lies and misunderstandings: Systematic review on behavioural decision making in projects

    DEFF Research Database (Denmark)

    Stingl, Verena; Geraldi, Joana

    2017-01-01

    The literature on behavioural decision making in projects is fragmented and draws only on a fraction of the recent, insightful, and relevant developments on behavioural decision making. This paper organizes current research in a conceptual framework rooted in three schools of thinking—reductionist (on cognitive limitations—errors), pluralist (on political behaviour—lies), and contextualist (on social and organizational sensemaking—misunderstandings). Our review suggests avenues for future research with a wider coverage of theories in cognitive and social psychology and critical and mindful integration of findings in projects and beyond.

  12. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  13. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    Directory of Open Access Journals (Sweden)

    Ulrik W Nash

    Full Text Available Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem.
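
    The AQ mechanism can be illustrated with a toy random-walk simulation; the step rule, cue count and parameter values below are illustrative assumptions rather than the paper's calibrated model. When the true value is extreme relative to the anchor and cues are categorized better than chance, the simulated judgment distribution becomes skewed, and the skew travels with the collective error:

```python
import numpy as np

rng = np.random.default_rng(0)

def aq_judgments(n_judges=5000, n_cues=12, p_correct=0.7, anchor=0.0, truth=4.0):
    # Each judge starts at the anchor and processes n_cues sequential cues;
    # each cue is categorized correctly (a unit step toward the true value)
    # with probability p_correct, otherwise the judge steps away from it.
    judgments = np.empty(n_judges)
    for j in range(n_judges):
        pos = anchor
        for _ in range(n_cues):
            toward = 1.0 if truth > pos else -1.0
            pos += toward if rng.random() < p_correct else -toward
        judgments[j] = pos
    return judgments

j = aq_judgments()
collective_error = j.mean() - 4.0
skewness = np.mean((j - j.mean()) ** 3) / j.std() ** 3
print(f"collective error: {collective_error:.2f}, skewness: {skewness:.2f}")
```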

  14. Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review

    Science.gov (United States)

    2013-01-01

    Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients. PMID:23565739

  15. Artificial neural network implementation of a near-ideal error prediction controller

    Science.gov (United States)

    Mcvey, Eugene S.; Taylor, Lynore Denise

    1992-01-01

    A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is simply a system which anticipates a signal, or part of a signal, before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited (i.e., not random), near-ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system becomes stable once the predicted error is included in the system transfer function. Previous and current prediction controller developments include pattern recognition techniques and fast-time simulation, which are applicable to the analysis of linear sampled-data systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response to all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using neural networks. Neural networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. In using this type of error prediction, it is assumed that the system error

  16. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

    Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is applied. A simulation study was performed on the AFC scheme with KBTEPM applied to a two-link planar manipulator, for which a set of rule-based algorithms was derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Key words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  17. [Statistical Process Control (SPC) can help prevent treatment errors without increasing costs in radiotherapy].

    Science.gov (United States)

    Govindarajan, R; Llueguera, E; Melero, A; Molero, J; Soler, N; Rueda, C; Paradinas, C

    2010-01-01

    Statistical Process Control (SPC) was applied to monitor patient set-up in radiotherapy and, when the measured set-up error values indicated a loss of process stability, the root cause was identified and eliminated to prevent set-up errors. Set-up errors were measured in the medial-lateral (ml), cranial-caudal (cc) and anterior-posterior (ap) dimensions, and the upper control limits were then calculated. Once the control limits were known and the range variability was acceptable, treatment set-up errors were monitored using subgroups of 3 patients, three times each shift, and the values were plotted on a control chart in real time. The control limit values showed that the existing variation was acceptable. Set-up errors, measured and plotted on an X chart, helped monitor the stability of the set-up process; if and when stability was lost, treatment was interrupted, the particular cause responsible for the non-random pattern was identified, and corrective action was taken before proceeding with the treatment. The SPC protocol focuses on controlling the variability due to assignable causes instead of focusing on patient-to-patient variability, which normally does not exist. Compared to weekly sampling of the set-up error in each and every patient, which can only ensure that the sampled sessions were set up correctly, the SPC method enables set-up error prevention in all treatment sessions for all patients and, at the same time, reduces control costs.
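
    As a sketch of the charting step described above, the fragment below computes X-bar and R control limits for subgroups of three set-up measurements using the standard chart constants for n = 3; the simulated set-up errors and their 2 mm spread are illustrative assumptions, not the paper's data.

```python
import numpy as np

# X-bar / R chart constants for subgroup size n = 3
A2, D4 = 1.023, 2.574

rng = np.random.default_rng(1)
# Illustrative set-up errors (mm), grouped three patients at a time as in
# the monitoring protocol described in the abstract.
subgroups = rng.normal(0.0, 2.0, size=(30, 3))

xbar = subgroups.mean(axis=1)                        # subgroup means
R = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges

xbarbar, rbar = xbar.mean(), R.mean()
UCL, LCL = xbarbar + A2 * rbar, xbarbar - A2 * rbar  # X-bar chart limits
UCL_R = D4 * rbar                                    # R chart upper limit

flagged = np.where((xbar > UCL) | (xbar < LCL))[0]
print(f"X-bar limits: [{LCL:.2f}, {UCL:.2f}] mm, R-chart UCL: {UCL_R:.2f} mm")
print("subgroups signalling loss of stability:", flagged)
```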

  18. Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism

    Science.gov (United States)

    Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.

    2018-05-01

    The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting through both constant and energy-dependent cross-sections, although in the latter case only for sufficiently dilute systems. We find that the optimal fit technique consists of simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
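
    A minimal sketch of that fitting recipe is given below, using an AR(1) surrogate for the shear-stress time series so that the true correlation time is known; the amplitude threshold standing in for the relative-error criterion and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import lfilter

rng = np.random.default_rng(2)

# Surrogate shear-stress series with exponential memory (an AR(1) process
# whose autocorrelation decays as exp(-t / tau_true)).
tau_true, dt, n = 5.0, 0.1, 200_000
phi = np.exp(-dt / tau_true)
sigma_xy = lfilter([1.0], [1.0, -phi], rng.normal(size=n))

def acf(x, nlags):
    x = x - x.mean()
    f = np.fft.rfft(x, n=2 * len(x))           # FFT-based autocorrelation
    c = np.fft.irfft(f * np.conj(f))[:nlags]
    return c / c[0]

nlags = 400
C = acf(sigma_xy, nlags)
t = np.arange(nlags) * dt

# Fix the intercept at C(0) = 1 and fit only the decay time, restricting the
# window to lags where the correlation is still large relative to its noise
# (a simple stand-in for the paper's relative-error criterion).
window = C > 0.1
tau_fit, _ = curve_fit(lambda t, tau: np.exp(-t / tau),
                       t[window], C[window], p0=[1.0])
print(f"fitted correlation time: {tau_fit[0]:.2f} (true {tau_true})")
# Green-Kubo: eta = (V / k_B T) * <sigma_xy^2> * tau  (prefactors omitted)
```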

  19. The accuracy of webcams in 2D motion analysis: sources of error and their control

    International Nuclear Information System (INIS)

    Page, A; Candelas, P; Belmar, F; Moreno, R

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost

  20. The accuracy of webcams in 2D motion analysis: sources of error and their control

    Energy Technology Data Exchange (ETDEWEB)

    Page, A; Candelas, P; Belmar, F [Departamento de Fisica Aplicada, Universidad Politecnica de Valencia, Valencia (Spain); Moreno, R [Instituto de Biomecanica de Valencia, Valencia (Spain)], E-mail: alvaro.page@ibv.upv.es

    2008-07-15

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost.

  1. Analysis of Human Error Types and Performance Shaping Factors in the Next Generation Main Control Room

    International Nuclear Information System (INIS)

    Sin, Y. C.; Jung, Y. S.; Kim, K. H.; Kim, J. H.

    2008-04-01

    Main control rooms of nuclear power plants have been computerized and digitalized in new and modernized plants as information and digital technologies make great progress and become mature. Human factors engineering issues in advanced MCRs were surveyed using both a model-based approach and a literature-survey-based approach. Human error types and performance shaping factors were then analyzed for three representative human errors. The results of the project can be used for task analysis, evaluation of human error probabilities, and analysis of performance shaping factors in HRA.

  2. Falls and Postural Control in Older Adults With Eye Refractive Errors

    Directory of Open Access Journals (Sweden)

    Afsun Nodehi-Moghadam

    2016-04-01

    Conclusion: Vision impairment of older adults due to refractive error is not associated with an increase in falls. Furthermore, TUG test results did not show balance disorders in these groups. Further research, such as assessment of postural control with advanced devices and consideration of other fall risk factors, is needed to identify the predictors of falls in older adults with eye refractive errors.

  3. Systematic errors in the determination of the spectroscopic g-factor in broadband ferromagnetic resonance spectroscopy: A proposed solution

    Science.gov (United States)

    Gonzalez-Fuentes, C.; Dumas, R. K.; García, C.

    2018-01-01

    A theoretical and experimental study of the influence of small offsets of the magnetic field (δH) on the measurement accuracy of the spectroscopic g-factor (g) and saturation magnetization (Ms) obtained by broadband ferromagnetic resonance (FMR) measurements is presented. The random nature of δH generates systematic and opposite sign deviations of the values of g and Ms with respect to their true values. A δH on the order of a few Oe leads to a ˜10% error of g and Ms for a typical range of frequencies employed in broadband FMR experiments. We propose a simple experimental methodology to significantly minimize the effect of δH on the fitted values of g and Ms, eliminating their apparent dependence in the range of frequencies employed. Our method was successfully tested using broadband FMR measurements on a 5 nm thick Ni80Fe20 film for frequencies ranging between 3 and 17 GHz.
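
    The bias mechanism is easy to reproduce: generate resonance fields from the in-plane Kittel relation, add a constant field offset δH, and fit without modelling the offset; the fitted g and Ms then deviate in opposite directions, as the abstract describes. The CGS-style Kittel form, the permalloy-like numbers and the 5 Oe offset below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_UNIT = 1.3996e6  # Hz/Oe per unit g-factor (mu_B / h in CGS field units)

def kittel(H, g, M_eff):
    # In-plane Kittel relation: f = g * GAMMA_UNIT * sqrt(H * (H + 4*pi*M_eff))
    return g * GAMMA_UNIT * np.sqrt(H * (H + 4.0 * np.pi * M_eff))

g_true, M_eff_true, dH = 2.11, 800.0, 5.0   # illustrative values; 5 Oe offset
f = np.linspace(3e9, 17e9, 15)              # broadband FMR frequencies (Hz)

# True resonance fields from inverting the Kittel relation...
b = 4.0 * np.pi * M_eff_true
H_res = (-b + np.sqrt(b**2 + 4.0 * (f / (g_true * GAMMA_UNIT))**2)) / 2.0
H_meas = H_res + dH                          # ...recorded with a field offset

popt, _ = curve_fit(kittel, H_meas, f, p0=[2.0, 700.0])
print(f"fitted g     = {popt[0]:.3f} (true {g_true})")
print(f"fitted 4piMs = {4.0 * np.pi * popt[1]:.0f} Oe (true {b:.0f} Oe)")
```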

  4. Development of a framework to estimate human error for diagnosis tasks in advanced control room

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Jang, In Seok; Seong, Proong Hyun

    2014-01-01

    In the emergency situation of nuclear power plants (NPPs), a diagnosis of the occurring events is crucial for managing or controlling the plant to a safe and stable condition. If the operators fail to diagnose the occurring events or relevant situations, their responses may eventually be inappropriate or inadequate. Accordingly, considerable research has been performed to identify the causes of diagnosis errors and to estimate the probability of diagnosis error. D.I. Gertman et al. asserted that 'the cognitive failures stem from erroneous decision-making, poor understanding of rules and procedures, and inadequate problem solving and this failures may be due to quality of data and people's capacity for processing information'. Many researchers have also asserted that the human-system interface (HSI), procedures, training and available time are critical factors causing diagnosis errors. In nuclear power plants, a diagnosis of the event is critical for the safe condition of the system. As advanced main control rooms are adopted in nuclear power plants, the operators obtain plant data via computer-based HSIs and procedures. In this regard, diagnosis errors and their causes were identified using simulation data. From this study, some useful insights to reduce diagnosis errors of operators in advanced main control rooms are provided.

  5. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Full Text Available Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty that can come from several sources. In this paper, we study the effect of these sources of variability on the power characteristics of the control chart and obtain values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
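
    A small sketch of an ARL calculation for a count chart on zero-truncated Poisson data is shown below; the control limits, the in-control mean and the additive-bias error model are illustrative assumptions, not the paper's formulation.

```python
import math

def ztpd_pmf(k, lam):
    # Zero-truncated Poisson: P(X = k) = e^-lam lam^k / (k! (1 - e^-lam)), k >= 1
    return math.exp(-lam) * lam**k / (math.factorial(k) * (1.0 - math.exp(-lam)))

def arl(lam, lcl, ucl):
    # ARL = 1 / P(point outside [lcl, ucl]) for independently plotted points
    p_in = sum(ztpd_pmf(k, lam) for k in range(lcl, ucl + 1))
    return math.inf if p_in >= 1.0 else 1.0 / (1.0 - p_in)

lcl, ucl = 1, 9          # illustrative integer control limits
print(f"in-control ARL (lambda = 4): {arl(4.0, lcl, ucl):.0f}")

# A simple bias-type measurement error inflates the observed mean count and
# changes the chart's power to detect a shift to lambda = 6:
for bias in (0.0, 0.5, 1.0):
    print(f"measurement bias {bias}: out-of-control ARL = "
          f"{arl(6.0 + bias, lcl, ucl):.1f}")
```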

  6. Masked and unmasked error-related potentials during continuous control and feedback

    Science.gov (United States)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain-computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's kappa, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's kappa, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the

  7. Avoiding Systematic Errors in Isometric Squat-Related Studies without Pre-Familiarization by Using Sufficient Numbers of Trials

    Directory of Open Access Journals (Sweden)

    Pekünlü Ekim

    2014-10-01

    Full Text Available There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies, in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization, might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first n trials (3 ≤ n ≤ 8) of these 8 squats were regarded as the MISS obtained using 6 different MISS test methods featuring different numbers of trials (the Best of n Trials Method, BnT). When B3T and B8T were paired with the other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93-0.98) and coefficients of variation (3.4-7.0%). Wilcoxon's signed rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using the other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9-10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning-effect-related systematic errors.
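
    The learning-effect bias is easy to see in a toy simulation of the best-of-n procedure: if early trials are depressed by an acute learning effect, B3T systematically underestimates the plateau strength relative to B8T. The decay model and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_miss(n_subjects=40, n_trials=10, plateau=2000.0,
                  learn_deficit=0.08, tau=2.0, cv_noise=0.03):
    # Trial i output rises toward the subject's true maximum (plateau) as
    # the acute learning effect decays; all parameters are illustrative.
    i = np.arange(n_trials)
    learning = 1.0 - learn_deficit * np.exp(-i / tau)
    noise = rng.normal(1.0, cv_noise, size=(n_subjects, n_trials))
    return plateau * learning * noise

trials = simulate_miss()
for n in (3, 5, 8, 10):
    best_n = trials[:, :n].max(axis=1)     # the Best of n Trials (BnT) score
    print(f"B{n}T mean MISS: {best_n.mean():7.1f} N")
```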

  8. A parallel row-based algorithm for standard cell placement with integrated error control

    Science.gov (United States)

    Sargent, Jeff S.; Banerjee, Prith

    1989-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.

  9. Optimal control strategy to reduce the temporal wavefront error in AO systems

    NARCIS (Netherlands)

    Doelman, N.J.; Hinnen, K.J.G.; Stoffelen, F.J.G.; Verhaegen, M.H.

    2004-01-01

    An Adaptive Optics (AO) system for astronomy is analysed from a control point of view. The focus is put on the temporal error. The AO controller is identified as a feedback regulator system, operating in closed-loop with the aim of rejecting wavefront disturbances. Limitations on the performance of

  10. Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion

    Science.gov (United States)

    Blomquist, Mats; Wernersson, Ake V.

    1999-11-01

    When range cameras are used for analyzing irregular material on a conveyor belt, there will be complications such as missing segments caused by occlusion. A number of range discontinuities will also be present. In a framework based on stochastic geometry, conditions are found for the cases in which range discontinuities take place. The test objects in this paper are pellets for the steel industry. An illuminating laser plane gives range discontinuities at the edges of each individual object. These discontinuities are used to detect and measure the chord created by the intersection of the laser plane and the object. From the measured chords we derive the average diameter and its variance. An improved method is to use a pair of parallel illuminating light planes to extract two chords. The estimation error for this method is no larger than the natural shape fluctuations (the difference in diameter) of the pellets. The laser-camera optronics is sensitive enough both for material on a conveyor belt and for free-falling material leaving the conveyor.

  11. Systematical and statistical errors in using reference light sources to calibrate TLD readers

    International Nuclear Information System (INIS)

    Burgkhardt, B.; Piesch, E.

    1981-01-01

    Three light sources, namely an NaI(Tl) scintillator + Ra, an NaI(Tl) scintillator + ¹⁴C and a plastic scintillator + ¹⁴C, were used during a period of 24 months for a daily check of two TLD readers: the Harshaw 2000 A + B and the Toledo 651. On the basis of light source measurements, long-term changes and day-to-day fluctuations of the reader response were investigated. Systematic changes of the Toledo reader response of up to 6% during a working week are explained by nitrogen effects in the plastic scintillator light source. The temperature coefficient of the light source intensity was found to be -0.05%/°C for the plastic scintillator and -0.3%/°C for the NaI(Tl) scintillator. The ²¹⁰Pb content in the Ra-activated NaI(Tl) scintillator caused a time-dependent decrease in light source intensity of 3%/yr for the light source in the Harshaw reader. The internal light sources revealed a relative standard deviation of 0.5% for the Toledo reader and the Harshaw reader after respective reading times of 0.45 and 100 sec. (author)

  12. Resilience to emotional distress in response to failure, error or mistakes: A systematic review.

    Science.gov (United States)

    Johnson, Judith; Panagioti, Maria; Bass, Jennifer; Ramsey, Lauren; Harrison, Reema

    2017-03-01

    Perceptions of failure have been implicated in a range of psychological disorders, and even a single experience of failure can heighten anxiety and depression. However, not all individuals experience significant emotional distress following failure, indicating the presence of resilience. The current systematic review synthesised studies investigating resilience factors against emotional distress resulting from the experience of failure. For the definition of resilience we used the Bi-Dimensional Framework for resilience research (BDF), which suggests that resilience factors are those which buffer the impact of risk factors, and which outlines criteria a variable should meet in order to be considered as conferring resilience. Studies were identified through electronic searches of PsycINFO, MEDLINE, EMBASE and Web of Knowledge. Forty-six relevant studies reported in 38 papers met the inclusion criteria. These provided evidence of the presence of factors which confer resilience to emotional distress in response to failure. The strongest support was found for the factors of higher self-esteem, more positive attributional style, and lower socially-prescribed perfectionism. Weaker evidence was found for the factors of lower trait reappraisal, lower self-oriented perfectionism and higher emotional intelligence. The majority of studies used experimental or longitudinal designs. These results identify specific factors which should be targeted by resilience-building interventions. Keywords: resilience; failure; stress; self-esteem; attributional style; perfectionism.

  13. Reducing Systematic Errors for Seismic Event Locations Using a Model Incorporating Anisotropic Regional Structures

    National Research Council Canada - National Science Library

    Smith, Gideon P; Wiens, Douglas A

    2006-01-01

    ...) to predict travel times of P-wave propagation at distances of 2-14 degrees. At such distances, the phase Pn is the seismic phase that is most frequently reported and that thus controls the location accuracy...

  14. Constituent quarks and systematic errors in mid-rapidity charged multiplicity dNch/dη distributions

    Science.gov (United States)

    Tannenbaum, M. J.

    2018-01-01

    Centrality definition in A + A collisions at colliders such as RHIC and LHC suffers from a correlated systematic uncertainty caused by the efficiency of detecting a p + p collision (50 ± 5% for PHENIX at RHIC). In A + A collisions, where centrality is measured by the number of nucleon collisions, Ncoll, the number of nucleon participants, Npart, or the number of constituent quark participants, Nqp, the error in the efficiency of the primary interaction trigger (Beam-Beam Counters) for a p + p collision leads to a correlated systematic uncertainty in Npart, Ncoll or Nqp which reduces binomially as the A + A collisions become more central. If this is not correctly accounted for in projections of A + A to p + p collisions, then mistaken conclusions can result. A recent example is presented, concerning whether the mid-rapidity charged multiplicity per constituent quark participant (dNch/dη)/Nqp in Au + Au at RHIC is the same as the value in p + p collisions.

  15. What Makes Hydrologic Models Differ? Using SUMMA to Systematically Explore Model Uncertainty and Error

    Science.gov (United States)

    Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.

    2017-12-01

    Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of

  16. Using Feedback Error Learning for Control of Electro Hydraulic Servo System by Laguerre

    Directory of Open Access Journals (Sweden)

    Amir Reza Zare Bidaki

    2014-01-01

    Full Text Available In this paper, a new Laguerre controller is proposed to control an electro-hydraulic servo system. The proposed controller uses the feedback error learning method and leads to significantly improved performance, in terms of settling time and control signal amplitude, compared with other controllers. All derived results are validated by simulation of a nonlinear mathematical model of the system. The simulation results show the advantages of the proposed method in terms of both settling time and control signal amplitude.
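
    The core feedback-error-learning idea can be sketched in a few lines: the feedforward term is a learned inverse model, and the feedback controller's output serves as its teaching signal, shrinking toward zero as the model improves. The first-order plant, the P-gain, the simple [r, dr/dt] regressor standing in for the paper's Laguerre parameterization, and all gains below are illustrative assumptions.

```python
import numpy as np

# Feedback-error-learning sketch: adapt the feedforward inverse model using
# the feedback controller's output as the error (teaching) signal.
dt, T = 0.001, 10.0
a, b = 5.0, 2.0                    # toy plant: dy/dt = -a*y + b*u
Kp = 50.0                          # feedback (P) controller gain
w = np.zeros(2)                    # feedforward weights on [r, dr/dt]
eta = 5e-4                         # learning rate

t = np.arange(0.0, T, dt)
r = np.sin(2 * np.pi * 0.5 * t)    # reference trajectory
dr = np.gradient(r, dt)

y, e_log = 0.0, []
for k in range(len(t)):
    phi = np.array([r[k], dr[k]])
    u_ff = w @ phi                 # learned inverse model
    u_fb = Kp * (r[k] - y)         # feedback path
    y += dt * (-a * y + b * (u_ff + u_fb))   # Euler step of the plant
    w += eta * u_fb * phi          # FEL update: u_fb -> 0 as the model learns
    e_log.append(r[k] - y)

print("mean |e| first second:", np.mean(np.abs(e_log[:1000])))
print("mean |e| last second :", np.mean(np.abs(e_log[-1000:])))
```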

  17. Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control

    OpenAIRE

    Watanabe, Takashi; Sugi, Yoshihiro

    2010-01-01

    A feedforward controller would be useful for hybrid Functional Electrical Stimulation (FES) systems using powered orthotic devices. In this paper, a Feedback Error Learning (FEL) controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) together with an inverse dynamics model (IDM) to realize a feedforward FES controller. For the FES application, the ISM was tested in off-line learning using training data obtained by PID control of very slow movements. Computer simulation tests ...

  18. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground-simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator, and assumes perfect sensor and estimator models. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has the same characteristics (mean, variance and power spectral density) as the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling from the CAST software.
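
    The essence of such a signal generation model is to shape white noise so that its mean, variance and power spectral density match the target error process. A minimal sketch is below; the first-order shaping filter and all numbers are illustrative assumptions, not the SMAP estimation-error statistics.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

# Reproduce an error process with a given mean, variance and (low-pass) PSD
# by passing white noise through a shaping filter.
fs, n = 10.0, 100_000                   # sample rate (Hz), samples
f_c = 0.2                               # PSD corner frequency (Hz), assumed
target_mean, target_std = 0.01, 0.05    # e.g. attitude error statistics

b, a = signal.butter(1, f_c, fs=fs)     # first-order low-pass shaping filter
x = signal.lfilter(b, a, rng.normal(size=n))
x = target_mean + target_std * (x - x.mean()) / x.std()  # match mean/variance

f, Pxx = signal.welch(x, fs=fs, nperseg=4096)
print(f"mean = {x.mean():.4f}, std = {x.std():.4f}")
print("PSD roll-off check, P(1 Hz)/P(0.05 Hz):",
      Pxx[np.argmin(abs(f - 1.0))] / Pxx[np.argmin(abs(f - 0.05))])
```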

  19. Systematic errors in respiratory gating due to intrafraction deformations of the liver

    International Nuclear Information System (INIS)

    Siebenthal, Martin von; Szekely, Gabor; Lomax, Antony J.; Cattin, Philippe C.

    2007-01-01

    This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift

  20. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors; ERESYE - un sistema esperto per la valutazione di incertezze correlate ad errori sperimentali sistematici

    Energy Technology Data Exchange (ETDEWEB)

    Martinelli, T; Panini, G C [ENEA - Dipartimento Tecnologie Intersettoriali di Base, Centro Ricerche Energia, Casaccia (Italy); Amoroso, A [Ricercatore Ospite (Italy)

    1989-11-15

    Information about systematic errors is not given in EXFOR, the database of nuclear experimental measurements: their assessment is left to the ability of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, gives valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in the experiments. The features of the project which led to the implementation of the system are presented. (author)

  1. Negative control exposure studies in the presence of measurement error: implications for attempted effect estimate calibration.

    Science.gov (United States)

    Sanderson, Eleanor; Macdonald-Wallis, Corrie; Davey Smith, George

    2018-04-01

    Negative control exposure studies are increasingly being used in epidemiological studies to strengthen causal inference regarding an exposure-outcome association when unobserved confounding is thought to be present. Negative control exposure studies contrast the magnitude of association of the negative control, which has no causal effect on the outcome but is associated with the unmeasured confounders in the same way as the exposure, with the magnitude of the association of the exposure with the outcome. A markedly larger effect of the exposure on the outcome than the negative control on the outcome strengthens inference that the exposure has a causal effect on the outcome. We investigate the effect of measurement error in the exposure and negative control variables on the results obtained from a negative control exposure study. We do this in models with continuous and binary exposure and negative control variables using analysis of the bias of the estimated coefficients and Monte Carlo simulations. Our results show that measurement error in either the exposure or negative control variables can bias the estimated results from the negative control exposure study. Measurement error is common in the variables used in epidemiological studies; these results show that negative control exposure studies cannot be used to precisely determine the size of the effect of the exposure variable, or adequately adjust for unobserved confounding; however, they can be used as part of a body of evidence to aid inference as to whether a causal effect of the exposure on the outcome is present.
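
    The mechanism studied here can be reproduced with a short Monte Carlo sketch: classical measurement error attenuates both the exposure-outcome and negative-control-outcome associations, distorting the contrast the design relies on. The linear model and all coefficients below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

U = rng.normal(size=n)               # unmeasured confounder
X = 0.8 * U + rng.normal(size=n)     # exposure (causal effect 0.3 on Y)
Z = 0.8 * U + rng.normal(size=n)     # negative control (no causal effect)
Y = 0.3 * X + 0.5 * U + rng.normal(size=n)

def slope(v, y):
    # Simple-regression coefficient of y on v
    return np.cov(v, y)[0, 1] / np.var(v)

for err_sd in (0.0, 0.5, 1.0):
    Xm = X + rng.normal(0.0, err_sd, n)   # classical error on the exposure
    Zm = Z + rng.normal(0.0, err_sd, n)   # and on the negative control
    print(f"error sd = {err_sd}: beta_X = {slope(Xm, Y):.3f}, "
          f"beta_Z = {slope(Zm, Y):.3f}")
```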

  2. Systematic identification and robust control design for uncertain time delay processes

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2011-01-01

    A systematic procedure is proposed to handle the standard process control problem. The considered standard problem involves infrequent step disturbances to processes with large delays and measurement noise. The process is modeled as an ARX model and extended with a suitable noise model in order to reject unmeasured step disturbances and unavoidable model errors. This controller is illustrated to perform well for both set-point tracking and disturbance rejection for a SISO process example of a furnace, which has a time delay significantly longer than the dominating time constant.

  3. Design of power controller in CDMA system with power and SIR error minimization

    Institute of Scientific and Technical Information of China (English)

    Shulan KONG; Huanshui ZHANG; Zhaosheng ZHANG; Hongxia WANG

    2007-01-01

    In this paper, an uplink power control problem is considered for code division multiple access (CDMA) systems. A distributed algorithm is proposed based on linear quadratic optimal control theory. The proposed scheme minimizes the sum of the power and the error of signal-to-interference ratio (SIR). A power controller is designed by constructing an optimization problem of a stochastic linear quadratic type in Krein space and solving a Kalman filter problem.

  4. Cognitive Impairments in Occupational Burnout – Error Processing and Its Indices of Reactive and Proactive Control

    Directory of Open Access Journals (Sweden)

    Krystyna Golonka

    2017-05-01

    Full Text Available The presented study refers to cognitive aspects of burnout as the effects of long-term work-related stress. The purpose of the study was to investigate electrophysiological correlates of burnout to explain the mechanisms of the core burnout symptoms: exhaustion and depersonalization/cynicism. The analyzed error-related electrophysiological markers shed light on impaired cognitive mechanisms and the specific changes in information processing in burnout. In the EEG study design (N = 80), two components of the error-related potential (ERP), error-related negativity (ERN) and error positivity (Pe), were analyzed. In the non-clinical burnout group (N = 40), a significant increase in ERN amplitude and a decrease in Pe amplitude were observed compared to controls (N = 40). Enhanced error detection, indexed by increased ERN amplitude, and diminished response monitoring, indexed by decreased Pe amplitude, reveal emerging cognitive problems in the non-clinical burnout group. Cognitive impairments in burnout subjects relate to both reactive and unconscious (ERN) and proactive and conscious (Pe) aspects of error processing. The results indicate a stronger 'reactive control mode' that can deplete resources for proactive control and the ability to actively maintain goals. The analysis refers to error processing and specific task demands, and thus should not be extended to cognitive processes in general. The characteristics of ERP patterns in burnout resemble psychophysiological indexes of anxiety (increased ERN) and depressive symptoms (decreased Pe), showing to some extent an overlapping effect of burnout and related symptoms and disorders. The results support the scarce existing data on the psychobiological nature of burnout, while extending and specifying its cognitive characteristics.

  5. Correcting groove error in gratings ruled on a 500-mm ruling engine using interferometric control.

    Science.gov (United States)

    Mi, Xiaotao; Yu, Haili; Yu, Hongzhu; Zhang, Shanwen; Li, Xiaotian; Yao, Xuefeng; Qi, Xiangdong; Bayinhedhig; Wan, Qiuhua

    2017-07-20

    Groove error is one of the most important factors affecting grating quality and spectral performance. To reduce groove error, we propose a new ruling-tool carriage system based on aerostatic guideways. We design a new blank carriage system with double piezoelectric actuators. We also propose a completely closed-loop servo-control system with a new optical measurement system that can control the position of the diamond relative to the blank. To evaluate our proposed methods, we produced several gratings, including an echelle grating with 79 grooves/mm, a grating with 768 grooves/mm, and a high-density grating with 6000 grooves/mm. The results show that our methods effectively reduce groove error in ruled gratings.

  6. Periodic boundary conditions and the error-controlled fast multipole method

    Energy Technology Data Exchange (ETDEWEB)

    Kabadshow, Ivo

    2012-08-22

    The simulation of pairwise interactions in huge particle ensembles is a vital issue in scientific research. Especially the calculation of long-range interactions poses limitations to the system size, since these interactions scale quadratically with the number of particles. Fast summation techniques like the Fast Multipole Method (FMM) can help to reduce the complexity to O(N). This work extends the possible range of applications of the FMM to periodic systems in one, two and three dimensions with one unique approach. Together with a tight error control, this contribution enables the simulation of periodic particle systems for different applications without the need to know and tune the FMM specific parameters. The implemented error control scheme automatically optimizes the parameters to obtain an approximation for the minimal runtime for a given energy error bound.

  7. Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.

    Science.gov (United States)

    Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan

    2018-04-01

    In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.

  8. Partitioning, Automation and Error Recovery in the Control and Monitoring System of an LHC Experiment

    Institute of Scientific and Technical Information of China (English)

    C.Gaspar

    2001-01-01

    The Joint Controls Project (JCOP) is a collaboration between CERN and the four LHC experiments to find and implement common solutions for their control and monitoring systems. As part of this project, an Architecture Working Group was set up in order to study the requirements and devise an architectural model that would suit the four experiments. Many issues were studied by this working group: alarm handling, access control, hierarchical control, etc. This paper will report on the specific issue of hierarchical control and, in particular, partitioning, automation and error recovery.

  9. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
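
    A two-grid version of this idea can be sketched directly: solve the five-point system on a fine and a coarse grid whose nodes coincide, then form a Richardson-type error estimate from their difference (the paper itself uses three grids and controls both absolute and relative error; the test problem below is an illustrative assumption).

```python
import numpy as np
from scipy.sparse import kron, identity, diags
from scipy.sparse.linalg import spsolve

def solve_poisson(m):
    # Five-point solution of -Laplace(u) = f on the unit square with
    # homogeneous Dirichlet data and an m x m interior grid.
    h = 1.0 / (m + 1)
    T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    A = (kron(identity(m), T) + kron(T, identity(m))) / h**2
    x = np.linspace(h, 1.0 - h, m)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
    return spsolve(A.tocsr(), f.ravel()).reshape(m, m), X, Y

u_h, X, Y = solve_poisson(31)      # fine grid, h = 1/32
u_2h, _, _ = solve_poisson(15)     # coarse grid, h = 1/16 (nodes coincide)

# Second-order Richardson estimate of the fine-grid error at coarse nodes:
# u_h = u + C h^2 and u_2h = u + 4 C h^2  =>  e_h ~ (u_2h - u_h) / 3.
est = (u_2h - u_h[1::2, 1::2]) / 3.0
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
true_err = u_h[1::2, 1::2] - exact[1::2, 1::2]
print(f"estimated |error|_max: {np.abs(est).max():.2e}")
print(f"true      |error|_max: {np.abs(true_err).max():.2e}")
```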

  10. Control strategies for active noise barriers using near-field error sensing

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    In this paper active noise control strategies for noise barriers are presented which are based on the use of sensors near the noise barrier. Virtual error signals are derived from these near-field sensor signals such that reductions of the far-field sound pressure are obtained with the active

  11. Deciphering the genetic regulatory code using an inverse error control coding framework.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; May, Elebeoba Eni; Brown, William Michael; Johnston, Anna Marie; Watson, Jean-Paul

    2005-03-01

    We have found that developing a computational framework for reconstructing error control codes for engineered data and ultimately for deciphering genetic regulatory coding sequences is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be considered sufficient as a proof of concept to the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high risk high payoff realm into the highly probable high payoff domain. Additionally this work will impact biological sensor development and the ability to model and ultimately develop defense mechanisms against bioagents that can be engineered to cause catastrophic damage. Understanding how biological organisms are able to communicate their genetic message efficiently in the presence of noise can improve our current communication protocols, a continuing research interest. Towards this end, project goals include: (1) Develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes. Use methods to determine error control (EC) code parameters for gene regulatory sequence. (2) Develop an evolutionary computing computational framework for near-optimal solutions to the algebraic code reconstruction problem. Method will be tested on engineered and biological sequences.

  12. The influence of random and systematic errors on a general definition of minimum detectable amount (MDA) applicable to all radiobioassay measurements

    International Nuclear Information System (INIS)

    Brodsky, A.

    1985-01-01

    An approach to defining minimum detectable amount (MDA) of radioactivity in a sample will be discussed, with the aim of obtaining comments helpful in developing a formulation of MDA that will be broadly applicable to all kinds of radiobioassay measurements, and acceptable to the scientists who make these measurements. Also, the influence of random and systematic errors on the defined MDA are examined

  13. Impact of controlling the sum of error probability in the sequential probability ratio test

    Directory of Open Access Journals (Sweden)

    Bijoy Kumarr Pradhan

    2013-05-01

    Full Text Available A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test so as to minimize the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of the error probabilities is a pre-assigned constant, and thereby to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed-sample-size procedure. The results are applied to cases in which the random variate follows a normal law as well as a Bernoullian law.
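
    The quantity being optimized can be sketched with Wald's SPRT for a normal mean: fix the sum alpha + beta, vary the split, and compare the weighted average of the two average sample numbers (ASNs) by simulation. The hypotheses, the 0.05 error-probability sum and the equal weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def sprt_asn(mu_true, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.025, beta=0.025,
             n_rep=2000, n_max=10_000):
    # Average sample number of Wald's SPRT with thresholds
    # A = log((1 - beta)/alpha), B = log(beta/(1 - alpha)).
    A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    ns = np.empty(n_rep)
    for rep in range(n_rep):
        llr, n = 0.0, 0
        while B < llr < A and n < n_max:
            x = rng.normal(mu_true, sigma)
            llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
            n += 1
        ns[rep] = n
    return ns.mean()

# Split a fixed error-probability sum alpha + beta = 0.05 in different ways
# and compare the (equally) weighted average of the two ASNs.
for alpha in (0.01, 0.025, 0.04):
    beta = 0.05 - alpha
    asn0 = sprt_asn(0.0, alpha=alpha, beta=beta)
    asn1 = sprt_asn(1.0, alpha=alpha, beta=beta)
    print(f"alpha = {alpha:.3f}, beta = {beta:.3f}: "
          f"0.5*ASN0 + 0.5*ASN1 = {0.5 * asn0 + 0.5 * asn1:.1f}")
```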

  14. Performance of an Error Control System with Turbo Codes in Powerline Communications

    Directory of Open Access Journals (Sweden)

    Balbuena-Campuzano Carlos Alberto

    2014-07-01

    Full Text Available This paper reports the performance of turbo codes as an error control technique in PLC (powerline communications) data transmissions. For this system, computer simulations are used to model data networks based on the model classified in the technical literature as indoor, using OFDM (orthogonal frequency division multiplexing) as the modulation technique. Taking into account the channel, the modulation and the turbo codes, we propose a methodology to minimize the bit error rate (BER) as a function of the average received signal-to-noise ratio (SNR).

  15. Correcting errors in a quantum gate with pushed ions via optimal control

    International Nuclear Information System (INIS)

    Poulsen, Uffe V.; Sklarz, Shlomo; Tannor, David; Calarco, Tommaso

    2010-01-01

    We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high fidelity compatible with scalable fault-tolerant quantum computing.

  16. Correcting errors in a quantum gate with pushed ions via optimal control

    DEFF Research Database (Denmark)

    Poulsen, Uffe Vestergaard; Sklarz, Shlomo; Tannor, David

    2010-01-01

    We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high fidelity compatible with scalable fault-tolerant quantum computing.

  17. Electronic laboratory system reduces errors in National Tuberculosis Program: a cluster randomized controlled trial.

    Science.gov (United States)

    Blaya, J A; Shin, S S; Yale, G; Suarez, C; Asencios, L; Contreras, C; Rodriguez, P; Kim, J; Cegielski, P; Fraser, H S F

    2010-08-01

    To evaluate the impact of the e-Chasqui laboratory information system in reducing reporting errors compared to the current paper system. Cluster randomized controlled trial in 76 health centers (HCs) between 2004 and 2008. Baseline data were collected every 4 months for 12 months. HCs were then randomly assigned to intervention (e-Chasqui) or control (paper). Further data were collected for the same months the following year. Comparisons were made between intervention and control HCs, and before and after the intervention. Intervention HCs had respectively 82% and 87% fewer errors in reporting results for drug susceptibility tests (2.1% vs. 11.9%, P = 0.001, OR 0.17, 95%CI 0.09-0.31) and cultures (2.0% vs. 15.1%, P < 0.001). e-Chasqui users sent on average three electronic error reports per week to the laboratories. e-Chasqui reduced the number of missing laboratory results at point-of-care health centers. Clinical users confirmed viewing electronic results not available on paper. Reporting errors to the laboratory using e-Chasqui promoted continuous quality improvement. The e-Chasqui laboratory information system is an important part of laboratory infrastructure improvements to support multidrug-resistant tuberculosis care in Peru.

  18. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    Energy Technology Data Exchange (ETDEWEB)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico; Reichardt, Christian L. [School of Physics, University of Melbourne, 313 David Caro building, Swanston St and Tin Alley, Parkville VIC 3010 (Australia); Baxter, Eric J. [Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd Street, Philadelphia, PA 19104 (United States); Bleem, Lindsey E. [Argonne National Laboratory, High-Energy Physics Division, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Crawford, Thomas M. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Holder, Gilbert P. [Department of Astronomy and Department of Physics, University of Illinois, 1002 West Green St., Urbana, IL 61801 (United States); Manzotti, Alessandro, E-mail: srinivasan.raghunathan@unimelb.edu.au, E-mail: s.patil2@student.unimelb.edu.au, E-mail: ebax@sas.upenn.edu, E-mail: federico.bianchini@unimelb.edu.au, E-mail: bleeml@uchicago.edu, E-mail: tcrawfor@kicp.uchicago.edu, E-mail: gholder@illinois.edu, E-mail: manzotti@uchicago.edu, E-mail: christian.reichardt@unimelb.edu.au [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  19. Analysis of systematic error deviation of water temperature measurement at the fuel channel outlet of the reactor Maria

    International Nuclear Information System (INIS)

    Bykowski, W.

    2000-01-01

    The reactor Maria has two primary cooling circuits: the fuel channel cooling circuit and the reactor pool cooling circuit. Fuel elements are placed inside the fuel channels, which are linked in parallel between the collectors. In the course of reactor operation the following measurements are performed: continuous measurement of the water temperature at the fuel channel inlets, continuous measurement of the water temperature at the outlet of each fuel channel, and continuous measurement of the water flow rate through each fuel channel. Based on these thermal-hydraulic parameters the instantaneous thermal power generated in each fuel channel is determined, and from that value the thermal balance and the degree of fuel burnup are assessed. The work contains an analysis estimating the systematic error of the temperature measurement at the outlet of each fuel channel, and hence the erroneous assessment of the thermal power extracted from each fuel channel and of the burnup degree of the individual fuel elements. Measurement results for the separate deviation factors of the fuel channels are enclosed. (author)

  20. Analysis of Wind Speed Forecasting Error Effects on Automatic Generation Control Performance

    Directory of Open Access Journals (Sweden)

    H. Rajabi Mashhadi

    2014-09-01

    Full Text Available The main goal of this paper is to study statistical indices and evaluate AGC indices in a power system with large penetration of wind turbine generation. The increasing penetration of wind turbine generation calls for further study of its impact on power system frequency control. Frequency deviates when real-time system generation and load are unbalanced, and wind turbine generation fluctuates strongly, making the system more unbalanced; the AGC loop then helps to adjust the system frequency and the scheduled tie-line powers. The quality of the AGC loop is measured by indices; a good index is a proper measure of AGC performance as the power system actually operates. One well-known measure in the literature, introduced by NERC, is the Control Performance Standards (CPS). It has previously been claimed that a key factor in the CPS index is related to the standard deviation of the generation error, the installed power, and the frequency response. This paper focuses on the impact of a several-hours-ahead wind speed forecast error on this factor. Furthermore, the performance of conventional control in power systems with large-scale wind turbine penetration is evaluated. The effects of the wind speed standard deviation and of the degree of wind farm penetration are analyzed, and the importance of the mentioned factor is examined. In addition, the influence of the mean wind speed forecast error on this factor is investigated. The study system is a two-area system with a significant wind farm in one area. The results show that the mean wind speed forecast error has a considerable effect on AGC performance, while the mentioned key factor is insensitive to this mean error.

  1. An Error-Entropy Minimization Algorithm for Tracking Control of Nonlinear Stochastic Systems with Non-Gaussian Variables

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yunlong; Wang, Aiping; Guo, Lei; Wang, Hong

    2017-07-09

    This paper presents an error-entropy minimization tracking control algorithm for a class of dynamic stochastic systems. The system is represented by a set of time-varying discrete nonlinear equations with non-Gaussian stochastic input, where the statistical properties of the stochastic input are unknown. By using Parzen windowing with a Gaussian kernel to estimate the probability densities of the errors, recursive algorithms are then proposed to design the controller such that the tracking error can be minimized. The performance of the error-entropy minimization criterion is compared with that of mean-square-error minimization in the simulation results.
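
    The core quantity in such minimum-error-entropy schemes can be illustrated compactly. The sketch below uses the Parzen-window "information potential" to estimate Rényi's quadratic entropy of the error samples, a common instantiation of error-entropy criteria; this illustrates the criterion only, not the paper's recursive controller, and the kernel width and data are invented.

        import numpy as np

        def renyi_quadratic_entropy(errors, sigma=0.5):
            """Parzen-window estimate of Renyi's quadratic entropy of the
            error samples, H2 = -log V, where the information potential V
            is the mean of a Gaussian kernel over all sample pairs."""
            e = np.asarray(errors)
            diffs = e[:, None] - e[None, :]
            # kernel of variance 2*sigma^2 arises from convolving two
            # Parzen kernels of width sigma
            v = np.mean(np.exp(-diffs**2 / (4 * sigma**2)))
            v /= np.sqrt(4 * np.pi * sigma**2)
            return -np.log(v)

        # entropy-minimizing controllers drive this down: a tight, peaked
        # error distribution has lower H2 than a broad one
        rng = np.random.default_rng(2)
        print(renyi_quadratic_entropy(rng.normal(0, 1.0, 500)))   # broad errors
        print(renyi_quadratic_entropy(rng.normal(0, 0.2, 500)))   # concentrated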

  2. Comparing Interval Management Control Laws for Steady-State Errors and String Stability

    Science.gov (United States)

    Weitz, Lesley A.; Swieringa, Kurt A.

    2018-01-01

    Interval Management (IM) is a future airborne spacing concept that leverages avionics to provide speed guidance to an aircraft to achieve and maintain a specified spacing interval from another aircraft. The design of a speed control law to achieve the spacing goal is a key aspect in the research and development of the IM concept. In this paper, two control laws that are used in much of the contemporary IM research are analyzed and compared to characterize steady-state errors and string stability. Numerical results are used to illustrate how the choice of control-law gains impacts the size of steady-state errors and string performance, and the potential trade-offs between those performance characteristics.
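
    As a toy illustration of how control-law structure determines steady-state spacing error, the sketch below simulates one follower holding a time-based spacing interval behind a lead aircraft under a constant speed disturbance: a proportional law leaves a steady-state error that a proportional-integral law removes. The gains, speeds and disturbance are invented and far simpler than the IM control laws analyzed in the paper.

        import numpy as np

        def simulate(law, wind=-5.0, dt=1.0, steps=600):
            """One follower tracking a 100 s spacing interval behind a lead
            at a constant 230 m/s, with a constant speed disturbance acting
            on the follower. 'law' maps (spacing error, its integral) to a
            commanded speed increment."""
            lead_speed = 230.0
            goal = 100.0 * lead_speed                # desired gap in meters
            x_lead, x_fol, integ = goal, 0.0, 0.0
            for _ in range(steps):
                err = (x_lead - x_fol) - goal        # positive when too far back
                integ += err * dt
                v_cmd = lead_speed + law(err, integ)
                x_lead += lead_speed * dt
                x_fol += (v_cmd + wind) * dt         # wind biases achieved speed
            return err

        print("P  law steady-state error:", simulate(lambda e, i: 0.05 * e))
        print("PI law steady-state error:", simulate(lambda e, i: 0.05 * e + 0.001 * i))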

  3. Nonlinear adaptive control system design with asymptotically stable parameter estimation error

    Science.gov (United States)

    Mishkov, Rumen; Darmonski, Stanislav

    2018-01-01

    The paper presents a new general method for nonlinear adaptive system design with asymptotic stability of the parameter estimation error. The advantages of the approach include asymptotic unknown parameter estimation without persistent excitation and capability to directly control the estimates transient response time. The method proposed modifies the basic parameter estimation dynamics designed via a known nonlinear adaptive control approach. The modification is based on the generalised prediction error, a priori constraints with a hierarchical parameter projection algorithm, and the stable data accumulation concepts. The data accumulation principle is the main tool for achieving asymptotic unknown parameter estimation. It relies on the parametric identifiability system property introduced. Necessary and sufficient conditions for exponential stability of the data accumulation dynamics are derived. The approach is applied in a nonlinear adaptive speed tracking vector control of a three-phase induction motor.

  4. Response Accuracy and Tracking Errors with Decentralized Control of Commercial V2G Chargers

    DEFF Research Database (Denmark)

    Ziras, Charalampos; Zecchino, Antonio; Marinelli, Mattia

    2018-01-01

    There is a growing interest in using the flexibility of electric vehicles (EVs) to provide power system services, such as fast frequency regulation. Decentralized control is advocated due to its reliability and much lower communication requirements. A commonly used linear droop characteristic...... results in low average efficiencies, whereas controllers with 3 modes (idle, fully charging, fully discharging) result in large reserve errors when the aggregation size is small. To address these issues, we propose a stochastic, decentralized controller with tunable response granularity which minimizes...... switching actions. The EV fleet operator can optimize the chargers’ performance according to the fleet size, the service error requirements, the average switching rate and the average efficiency. We use real efficiency characteristics from EVs and chargers providing fast frequency regulation and we show...
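
    The benefit of randomized, decentralized activation is easy to reproduce: if every charger independently switches on with probability equal to the requested activation fraction, the aggregate tracking error shrinks roughly as one over the square root of the fleet size. The sketch below is a minimal illustration of that scaling, not the tunable-granularity controller proposed in the paper; all figures are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        def aggregate_error(n_evs, target_fraction=0.37, trials=2000):
            """Each charger independently draws full-power-on with probability
            equal to the requested activation fraction; the fleet-level error
            is the gap between realized and requested aggregate response."""
            on = rng.random((trials, n_evs)) < target_fraction
            realized = on.mean(axis=1)
            return np.abs(realized - target_fraction).mean()

        for n in (10, 100, 1000):
            print(f"fleet of {n:4d}: mean reserve error {aggregate_error(n):.4f}")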

  5. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.

  6. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States); Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada); Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States); Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States); Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were
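
    The gamma metric at the center of this study is straightforward to compute in one dimension. The sketch below evaluates a gamma passing rate for a toy dose profile under both the conventional global 3%/3 mm criterion and the more sensitive local 2%/2 mm criterion discussed above; the profiles and tolerances are illustrative only.

        import numpy as np

        def gamma_pass_rate(ref, meas, x, dose_tol=0.03, dta_mm=3.0, local=False):
            """1D gamma analysis of a measured dose profile against a reference.
            dose_tol is the dose-difference criterion (fraction), dta_mm the
            distance-to-agreement (mm); local=True switches to the more
            error-sensitive local normalization."""
            denom = dose_tol * (ref if local else ref.max())  # per-point tolerance
            gammas = np.empty(len(x))
            for i, xi in enumerate(x):
                dist2 = ((xi - x) / dta_mm) ** 2
                dose2 = ((meas[i] - ref) / denom) ** 2
                gammas[i] = np.sqrt(np.min(dist2 + dose2))
            return np.mean(gammas <= 1.0)

        x = np.linspace(-30, 30, 121)                         # positions in mm
        ref = np.exp(-x**2 / 200)                             # toy reference profile
        meas = 1.02 * np.exp(-(x - 0.5)**2 / 200)             # 2% high, 0.5 mm shifted
        print("3%/3mm global:", gamma_pass_rate(ref, meas, x))
        print("2%/2mm local :", gamma_pass_rate(ref, meas, x, 0.02, 2.0, local=True))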

  7. Implementation of an operator model with error mechanisms for nuclear power plant control room operation

    International Nuclear Information System (INIS)

    Suh, Sang Moon; Cheon, Se Woo; Lee, Yong Hee; Lee, Jung Woon; Park, Young Taek

    1996-01-01

    SACOM (Simulation Analyser with Cognitive Operator Model) is being developed at the Korea Atomic Energy Research Institute to simulate human operators' cognitive characteristics during emergency situations at nuclear power plants. An operator model with error mechanisms has been developed and combined into SACOM to simulate the human operator's cognitive information processing, based on Rasmussen's decision ladder model. The operational logic for five different cognitive activities (Agents), the operator's attentional control (Controller), short-term memory (Blackboard), and long-term memory (Knowledge Base) has been developed and implemented on a blackboard architecture. A trial simulation with an emergency operation scenario has been performed to verify the operational logic. It was found that the operator model with error mechanisms is suitable for simulating operators' cognitive behavior in emergency situations

  8. Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control

    Directory of Open Access Journals (Sweden)

    Takashi Watanabe

    2010-01-01

    Full Text Available A feedforward controller would be useful for hybrid Functional Electrical Stimulation (FES) systems using powered orthotic devices. In this paper, a Feedback Error Learning controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) together with an inverse dynamics model (IDM) to realize a feedforward FES controller. For the FES application, the ISM was trained off line using data obtained by PID control of very slow movements. Computer simulation tests in controlling wrist joint movements showed that the ISM performed properly in the positioning task and that IDM learning was improved by using the ISM, increasing the output power ratio of the feedforward controller. The simple ISM learning method and the FEL-FES controller using the ISM would be useful in controlling the musculoskeletal system, which responds nonlinearly to electrical stimulation, and are therefore expected to be useful in hybrid FES systems using powered orthotic devices.
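
    The essence of feedback error learning is that the feedback controller's output serves as the training signal for the feedforward inverse model. The sketch below applies that rule to a toy first-order joint model with a linear-in-features inverse dynamics model; the plant, gains and trajectory are invented stand-ins for the wrist model and FES controller of the paper.

        import numpy as np

        # Toy first-order joint model y[k+1] = a*y[k] + b*u[k] standing in
        # for the stimulated wrist; a, b, gains and trajectory are invented.
        a, b = 0.9, 0.1
        eta, w = 0.02, np.zeros(2)                       # inverse-model weights

        t = np.arange(4000)
        y_des = 0.5 * np.sin(2 * np.pi * t / 400)        # desired trajectory
        y, integ, errs = 0.0, 0.0, []
        for k in range(len(t) - 1):
            err = y_des[k] - y
            errs.append(err)
            integ += err
            u_fb = 2.0 * err + 0.05 * integ              # feedback (PI) controller
            feats = np.array([y_des[k + 1], y_des[k]])   # inverse-model inputs
            u_ff = w @ feats                             # feedforward command
            # feedback-error learning rule: the feedback output is the
            # training signal for the feedforward inverse model
            w += eta * u_fb * feats
            y = a * y + b * (u_ff + u_fb)

        errs = np.array(errs)
        print("RMS tracking error, first 10%:", np.sqrt(np.mean(errs[:400] ** 2)))
        print("RMS tracking error, last 10% :", np.sqrt(np.mean(errs[-400:] ** 2)))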

  9. Generalized perturbation theory error control within PWR core-loading pattern optimization

    International Nuclear Information System (INIS)

    Imbriani, J.S.; Turinsky, P.J.; Kropaczek, D.J.

    1995-01-01

    The fuel management optimization code FORMOSA-P has been developed to determine the family of near-optimum loading patterns for PWR reactors. The code couples the optimization technique of simulated annealing (SA) with a generalized perturbation theory (GPT) model for evaluating core physics characteristics. To ensure the accuracy of the GPT predictions, as well as to maximize the efficiency of the SA search, a GPT error control method has been developed
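
    A minimal skeleton of the simulated annealing search used by such codes is shown below, with a toy "peaking" objective standing in for the GPT core-physics evaluation whose prediction error the paper controls; the assemblies, objective and cooling schedule are all invented.

        import numpy as np

        rng = np.random.default_rng(4)

        def peaking(pattern):
            """Toy stand-in for the core-physics evaluation: worst sum of
            'reactivities' over adjacent assembly pairs. In FORMOSA-P this
            step is the GPT model, whose error must be controlled so that
            SA accepts or rejects moves on trustworthy objective values."""
            return np.max(pattern[:-1] + pattern[1:])

        pattern = rng.permutation(np.linspace(0.8, 1.2, 20))
        init = peaking(pattern)
        best, best_cost, T = pattern.copy(), init, 0.1
        for step in range(5000):
            i, j = rng.integers(0, len(pattern), size=2)
            cand = pattern.copy()
            cand[i], cand[j] = cand[j], cand[i]           # swap two assemblies
            d = peaking(cand) - peaking(pattern)
            if d < 0 or rng.random() < np.exp(-d / T):    # Metropolis acceptance
                pattern = cand
                if peaking(pattern) < best_cost:
                    best, best_cost = pattern.copy(), peaking(pattern)
            T *= 0.999                                    # geometric cooling
        print(f"peaking: initial {init:.3f} -> best {best_cost:.3f}")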

  10. Patterning control strategies for minimum edge placement error in logic devices

    Science.gov (United States)

    Mulkens, Jan; Hanna, Michael; Slachter, Bram; Tel, Wim; Kubis, Michael; Maslow, Mark; Spence, Chris; Timoshkov, Vadim

    2017-03-01

    In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.

  11. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  12. A correction method for systematic error in (1)H-NMR time-course data validated through stochastic cell culture simulation.

    Science.gov (United States)

    Sokolenko, Stanislav; Aucoin, Marc G

    2015-09-04

    The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small
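
    The detection step described above reduces to a simple recipe: smooth every metabolite trend, compute percent deviations at each timepoint, and flag timepoints whose median deviation across all metabolites exceeds a threshold. The sketch below implements that recipe with a moving average standing in for the paper's nonparametric smoother, applied to simulated trends carrying one shared 8% dilution error; all figures are invented.

        import numpy as np

        rng = np.random.default_rng(5)

        def flag_systematic_timepoints(trends, window=5, threshold=0.05):
            """Flag timepoints where the median percent deviation of all
            metabolite trends from their smoothed fits exceeds a threshold.
            A dilution error shifts every metabolite in the same sample, so
            the median deviation across metabolites exposes it."""
            n = trends.shape[1]
            # edge-corrected moving average as a simple smoother
            norm = np.convolve(np.ones(n), np.ones(window), mode="same")
            smoothed = np.array([np.convolve(tr, np.ones(window), mode="same") / norm
                                 for tr in trends])
            pct_dev = (trends - smoothed) / smoothed
            median_dev = np.median(pct_dev, axis=0)       # across metabolites
            return np.where(np.abs(median_dev) > threshold)[0]

        # 12 simulated metabolite time-courses, all diluted by 8% in sample 15
        t = np.linspace(0.0, 1.0, 30)
        trends = np.array([c0 * (1 + s * t) for c0, s in
                           zip(rng.uniform(1, 5, 12), rng.uniform(-0.5, 1.0, 12))])
        trends *= 1 + rng.normal(0, 0.01, trends.shape)   # random noise
        trends[:, 15] *= 0.92                             # shared systematic error
        print("flagged timepoints:", flag_systematic_timepoints(trends))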

  13. Tracking error constrained robust adaptive neural prescribed performance control for flexible hypersonic flight vehicle

    Directory of Open Access Journals (Sweden)

    Zhonghua Wu

    2017-02-01

    Full Text Available A robust adaptive neural control scheme based on a back-stepping technique is developed for the longitudinal dynamics of a flexible hypersonic flight vehicle, which is able to ensure the state tracking error being confined in the prescribed bounds, in spite of the existing model uncertainties and actuator constraints. Minimal learning parameter technique–based neural networks are used to estimate the model uncertainties; thus, the amount of online updated parameters is largely lessened, and the prior information of the aerodynamic parameters is dispensable. With the utilization of an assistant compensation system, the problem of actuator constraint is overcome. By combining the prescribed performance function and sliding mode differentiator into the neural back-stepping control design procedure, a composite state tracking error constrained adaptive neural control approach is presented, and a new type of adaptive law is constructed. As compared with other adaptive neural control designs for hypersonic flight vehicle, the proposed composite control scheme exhibits not only low-computation property but also strong robustness. Finally, two comparative simulations are performed to demonstrate the robustness of this neural prescribed performance controller.

  14. Error Control Techniques for Efficient Multicast Streaming in UMTS Networks: Proposals and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Michele Rossi

    2004-06-01

    Full Text Available In this paper we introduce techniques for efficient multicast video streaming in UMTS networks where a video content has to be conveyed to multiple users in the same cell. Efficient multicast data delivery in UMTS is still an open issue. In particular, suitable solutions have to be found to cope with wireless channel errors, while maintaining both an acceptable channel utilization and a controlled delivery delay over the wireless link between the serving base station and the mobile terminals. Here, we first highlight that standard solutions such as unequal error protection (UEP) of the video flow are ineffective in UMTS systems due to the inherent large feedback delay at the link layer (Radio Link Control, RLC). Subsequently, we propose a local approach that solves errors directly at the UMTS link layer while keeping a reasonably high channel efficiency and saving, as much as possible, system resources. The solution that we propose in this paper is based on the usage of the common channel to serve all the interested users in a cell. In this way, we can save resources with respect to the case where multiple dedicated channels are allocated for every user. In addition to that, we present a hybrid ARQ (HARQ) proactive protocol that, at the cost of some redundancy (added to the link layer flow), is able to consistently improve the channel efficiency with respect to the plain ARQ case, thereby making the use of a single common channel for multicast data delivery feasible. In the last part of the paper we give some hints for future research, by envisioning the usage of the aforementioned error control protocols with suitably encoded video streams.

  15. Systematic Product Development of Control and Diagnosis Functionalities

    Science.gov (United States)

    Stetter, R.; Simundsson, A.

    2017-01-01

    In the scientific field of systematic product development, a wide range of helpful methods, guidelines and tools has been generated and published in recent years. Until now, little attention has been given to design guidelines that support product development engineers in designing products which allow and support control or diagnosis functions. The general trend towards ubiquitous computing and the first development steps towards cognitive systems, as well as a general trend toward higher product safety, reliability and reduced total cost of ownership (TCO) in many engineering fields, lead to a higher importance of control and diagnosis. In this paper a first attempt is made to formulate generally valid guidelines for how products can be developed in order to allow and to achieve effective and efficient control and diagnosis. The guidelines are elucidated using the example of an automated guided vehicle. One main concern of this paper is the integration of control and diagnosis functionalities into the development of complete systems which include mechanical, electrical and electronic subsystems. For the development of such systems the strategies, methods and tools of systematic product development have attracted significant attention during the last decades. Today, the functionality and safety of most products depend to a large degree on control and diagnosis functionalities. Still, there is comparatively little research concentrating on the integration of the development of these functionalities into the overall product development process. The paper starts with a background describing Systematic Product Development. The second section deals with the product development of the sample product. The third part clarifies the notions of monitoring, control and diagnosis. The following parts summarize some insights and formulate first hypotheses concerning control and diagnosis in Systematic Product Development.

  16. Commercialization of the software error control and versioning system

    OpenAIRE

    Vargas Caicedo, Hilda Elisa; Rodriguez Loor, Carol Vanessa; Gaibor, Gustavo

    2009-01-01

    The main objective is to provide information technology solutions for the needs of companies, in order to achieve efficient administration of their processes, supported by continuous innovation and optimization. Our proposal focuses on offering a system that combines the functions of error control, helpdesk and software version control at a good price. Aspects such as sales and promotion strategies, income and expenses will be described. Apart from...

  17. Projective Synchronization of Chaotic Discrete Dynamical Systems via Linear State Error Feedback Control

    Directory of Open Access Journals (Sweden)

    Baogui Xin

    2015-04-01

    Full Text Available A projective synchronization scheme for a kind of n-dimensional discrete dynamical system is proposed by means of a linear feedback control technique. The scheme consists of master and slave discrete dynamical systems coupled by linear state error variables. A kind of novel 3-D chaotic discrete system is constructed, to which the test for chaos is applied. By using the stability principles of an upper or lower triangular matrix, two controllers for achieving projective synchronization are designed and illustrated with the novel systems. Lastly some numerical simulations are employed to validate the effectiveness of the proposed projective synchronization scheme.
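
    A one-dimensional toy version conveys the mechanism: the slave copies a scaled version of the master's drive plus a linear feedback of the state error, so the projective error e = y - alpha*x contracts geometrically whenever the feedback gain has magnitude below one. The paper's scheme is n-dimensional with a triangular-matrix stability analysis; the map, gain and scaling factor below are illustrative.

        def f(x, r=3.9):
            return r * x * (1 - x)                      # chaotic logistic map

        alpha, k_fb = 2.0, 0.5                          # scaling factor, feedback gain
        x, y = 0.3, 0.71                                # master and slave states
        for step in range(40):
            e = y - alpha * x                           # linear state error
            # slave: scaled copy of the master drive plus error feedback,
            # giving e[k+1] = k_fb * e[k]
            x, y = f(x), alpha * f(x) + k_fb * e
            if step % 10 == 9:
                print(f"step {step + 1:2d}: |y - alpha*x| = {abs(y - alpha * x):.2e}")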

  18. Smart photodetector arrays for error control in page-oriented optical memory

    Science.gov (United States)

    Schaffer, Maureen Elizabeth

    1998-12-01

    Page-oriented optical memories (POMs) have been proposed to meet high speed, high capacity storage requirements for input/output intensive computer applications. This technology offers the capability for storage and retrieval of optical data in two-dimensional pages resulting in high throughput data rates. Since currently measured raw bit error rates for these systems fall several orders of magnitude short of industry requirements for binary data storage, powerful error control codes must be adopted. These codes must be designed to take advantage of the two-dimensional memory output. In addition, POMs require an optoelectronic interface to transfer the optical data pages to one or more electronic host systems. Conventional charge coupled device (CCD) arrays can receive optical data in parallel, but the relatively slow serial electronic output of these devices creates a system bottleneck thereby eliminating the POM advantage of high transfer rates. Also, CCD arrays are "unintelligent" interfaces in that they offer little data processing capabilities. The optical data page can be received by two-dimensional arrays of "smart" photo-detector elements that replace conventional CCD arrays. These smart photodetector arrays (SPAs) can perform fast parallel data decoding and error control, thereby providing an efficient optoelectronic interface between the memory and the electronic computer. This approach optimizes the computer memory system by combining the massive parallelism and high speed of optics with the diverse functionality, low cost, and local interconnection efficiency of electronics. In this dissertation we examine the design of smart photodetector arrays for use as the optoelectronic interface for page-oriented optical memory. We review options and technologies for SPA fabrication, develop SPA requirements, and determine SPA scalability constraints with respect to pixel complexity, electrical power dissipation, and optical power limits. Next, we examine data

  19. Basic human error probabilities in advanced MCRs when using soft control

    International Nuclear Information System (INIS)

    Jang, In Seok; Seong, Poong Hyun; Kang, Hyun Gook; Lee, Seung Jun

    2012-01-01

    In a report on one of the renowned HRA methods, Technique for Human Error Rate Prediction (THERP), it is pointed out that 'The paucity of actual data on human performance continues to be a major problem for estimating HEPs and performance times in nuclear power plant (NPP) task'. However, another critical difficulty is that most current HRA databases deal with operation in conventional type of MCRs. With the adoption of new human system interfaces that are based on computer based technologies, the operation environment of MCRs in NPPs has changed. The MCRs including these digital and computer technologies, such as large display panels, computerized procedures, soft controls, and so on, are called advanced MCRs. Because of the different interfaces, different Basic Human Error Probabilities (BHEPs) should be considered in human reliability analyses (HRAs) for advanced MCRs. This study carries out an empirical analysis of human error considering soft controls. The aim of this work is not only to compile a database using the simulator for advanced MCRs but also to compare BHEPs with those of a conventional MCR database

  20. Motivational state controls the prediction error in Pavlovian appetitive-aversive interactions.

    Science.gov (United States)

    Laurent, Vincent; Balleine, Bernard W; Westbrook, R Frederick

    2018-01-01

    Contemporary theories of learning emphasize the role of a prediction error signal in driving learning, but the nature of this signal remains hotly debated. Here, we used Pavlovian conditioning in rats to investigate whether primary motivational and emotional states interact to control prediction error. We initially generated cues that positively or negatively predicted an appetitive food outcome. We then assessed how these cues modulated aversive conditioning when a novel cue was paired with a foot shock. We found that a positive predictor of food enhances, whereas a negative predictor of that same food impairs, aversive conditioning. Critically, we also showed that the enhancement produced by the positive predictor is removed by reducing the value of its associated food. In contrast, the impairment triggered by the negative predictor remains insensitive to devaluation of its associated food. These findings provide compelling evidence that the motivational value attributed to a predicted food outcome can directly control appetitive-aversive interactions and, therefore, that motivational processes can modulate emotional processes to generate the final error term on which subsequent learning is based. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Error characterization and quantum control benchmarking in liquid state NMR using quantum information processing techniques

    Science.gov (United States)

    Laforest, Martin

    Quantum information processing has been the subject of countless discoveries since the early 1990's. It is believed to be the way of the future for computation: using quantum systems permits one to perform computation exponentially faster than on a regular classical computer. Unfortunately, quantum systems that are not isolated do not behave well. They tend to lose their quantum nature due to the presence of the environment. If key information is known about the noise present in the system, methods such as quantum error correction have been developed in order to reduce the errors introduced by the environment during a given quantum computation. In order to harness the quantum world and implement the theoretical ideas of quantum information processing and quantum error correction, it is imperative to understand and quantify the noise present in the quantum processor and to benchmark the quality of the control over the qubits. Usual techniques to estimate the noise or the control are based on quantum process tomography (QPT), which, unfortunately, demands an exponential amount of resources. This thesis presents work towards the characterization of noisy processes in an efficient manner. The protocols are developed from a purely abstract setting with no system-dependent variables. To circumvent the exponential nature of quantum process tomography, three different efficient protocols are proposed and experimentally verified. The first protocol uses the idea of quantum error correction to extract relevant parameters about a given noise model, namely the correlation between the dephasing of two qubits. Following that is a protocol using randomization and symmetrization to extract the probability that a given number of qubits are simultaneously corrupted in a quantum memory, regardless of the specifics of the error and which qubits are affected. Finally, a last protocol, still using randomization ideas, is developed to estimate the average fidelity per computational gates for

  2. A GPU accelerated and error-controlled solver for the unbounded Poisson equation in three dimensions

    Science.gov (United States)

    Exl, Lukas

    2017-12-01

    An efficient solver for the three dimensional free-space Poisson equation is presented. The underlying numerical method is based on finite Fourier series approximation. While the error of all involved approximations can be fully controlled, the overall computation error is driven by the convergence of the finite Fourier series of the density. For smooth and fast-decaying densities the proposed method will be spectrally accurate. The method scales with O(N log N) operations, where N is the total number of discretization points in the Cartesian grid. The majority of the computational costs come from fast Fourier transforms (FFT), which makes it ideal for GPU computation. Several numerical computations on CPU and GPU validate the method and show efficiency and convergence behavior. Tests are performed using the Vienna Scientific Cluster 3 (VSC3). A free MATLAB implementation for CPU and GPU is provided to the interested community.
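
    The structure of an FFT-based Poisson solve, and the origin of the O(N log N) cost, can be seen in a periodic-box version, shown below with a manufactured solution. The paper's solver additionally handles free-space (open) boundary conditions via finite Fourier series, which this sketch does not attempt.

        import numpy as np

        def poisson_periodic_fft(rho, L=1.0):
            """Spectral solve of -Laplacian(phi) = rho on a periodic box, a
            minimal stand-in for the free-space solver. The cost is dominated
            by the FFTs, hence the O(N log N) scaling."""
            n = rho.shape[0]
            k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
            kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
            k2 = kx**2 + ky**2 + kz**2
            rho_hat = np.fft.fftn(rho)
            phi_hat = np.zeros_like(rho_hat)
            nz = k2 != 0
            phi_hat[nz] = rho_hat[nz] / k2[nz]     # zero-mean gauge for k = 0
            return np.real(np.fft.ifftn(phi_hat))

        # verify against phi = sin(2*pi*x) * sin(2*pi*y) * sin(2*pi*z)
        n = 64
        g = np.arange(n) / n
        X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
        phi_exact = np.sin(2*np.pi*X) * np.sin(2*np.pi*Y) * np.sin(2*np.pi*Z)
        rho = 3 * (2 * np.pi) ** 2 * phi_exact     # -Laplacian of phi_exact
        phi = poisson_periodic_fft(rho)
        print("max error:", np.max(np.abs(phi - phi_exact)))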

  3. Fast and error-resilient coherent control in an atomic vapor

    Science.gov (United States)

    He, Yizun; Wang, Mengbing; Zhao, Jian; Qiu, Liyang; Wang, Yuzhuo; Fang, Yami; Zhao, Kaifeng; Wu, Saijun

    2017-04-01

    Nanosecond chirped pulses from an optical arbitrary waveform generator are applied to both invert and coherently split the D1 line population of potassium vapor within a laser focal volume of 2×10^5 μm^3. An inversion fidelity of f > 96%, mainly limited by spontaneous emission during the nanosecond pulse, is inferred from both probe light transmission and superfluorescence emission. The nearly perfect inversion is uniformly achieved for laser intensity varying over an order of magnitude, and is tolerant to detuning errors of more than 1000 times the D1 transition linewidth. We further demonstrate enhanced intensity error resilience with multiple chirped pulses and "universal composite pulses". This fast and robust coherent control technique should find wide applications in the fields of quantum optics, laser cooling, and atom interferometry. This work is supported by the National Key Research Program of China under Grant No. 2016YFA0302000, and NNSFC under Grant No. 11574053.

  4. Variations in Static Force Control and Motor Unit Behavior with Error Amplification Feedback in the Elderly

    Directory of Open Access Journals (Sweden)

    Yi-Ching Chen

    2017-11-01

    Full Text Available Error amplification (EA) feedback is a promising approach to advance visuomotor skill. As error detection and visuomotor processing at short time scales decline with age, this study examined whether older adults could benefit from EA feedback that included higher-frequency information to guide a force-tracking task. Fourteen young and fourteen older adults performed low-level static isometric force-tracking with visual guidance of typical visual feedback and EA feedback containing augmented high-frequency errors. Stabilogram diffusion analysis was used to characterize force fluctuation dynamics. Also, the discharge behaviors of motor units and pooled motor unit coherence were assessed following the decomposition of multi-channel surface electromyography (EMG). EA produced different behavioral and neurophysiological impacts on young and older adults. Older adults exhibited inferior task accuracy with EA feedback than with typical visual feedback, but young adults did not. Although stabilogram diffusion analysis revealed that EA led to a significant decrease in critical time points for both groups, EA potentiated the critical point of force fluctuations ⟨ΔFc²⟩, short-term effective diffusion coefficients (Ds), and short-term exponent scaling only for the older adults. Moreover, in older adults, EA added to the size of discharge variability of motor units and discharge regularity of cumulative discharge rate, but suppressed the pooled motor unit coherence in the 13–35 Hz band. Virtual EA alters the strategic balance between open-loop and closed-loop controls for force-tracking. Contrary to expectations, the prevailing use of closed-loop control with EA that contained high-frequency error information enhanced the motor unit discharge variability and undermined the force steadiness in the older group, concerning declines in physiological complexity in the neurobehavioral system and the common drive to the motoneuronal pool against force destabilization.

  5. Systematic Analysis of Video Data from Different Human-Robot Interaction Studies: A Categorisation of Social Signals During Error Situations

    OpenAIRE

    Manuel Giuliani; Nicole Mirnig; Gerald Stollnberger; Susanne Stadler; Roland Buchner; Manfred Tscheligi

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows tha...

  6. Sensorless SPMSM Position Estimation Using Position Estimation Error Suppression Control and EKF in Wide Speed Range

    Directory of Open Access Journals (Sweden)

    Zhanshan Wang

    2014-01-01

    Full Text Available The control of a high performance alternating current (AC) motor drive under sensorless operation needs accurate estimation of the rotor position. In this paper, a method of accurately estimating the rotor position by using both position estimation based on a complex-number model of the motor and a position-estimation-error suppression proportional-integral (PI) controller is proposed for the sensorless control of the surface permanent magnet synchronous motor (SPMSM). In order to guarantee the accuracy of rotor position estimation in the flux-weakening region, a scheme for identifying the permanent magnet flux of the SPMSM by an extended Kalman filter (EKF) is also proposed, forming an effective combined method for high-accuracy sensorless control of the SPMSM. The simulation results demonstrate the validity and feasibility of the proposed position/speed estimation system.

  7. Effects of Systematic and Random Errors on the Retrieval of Particle Microphysical Properties from Multiwavelength Lidar Measurements Using Inversion with Regularization

    Science.gov (United States)

    Ramirez, Daniel Perez; Whiteman, David N.; Veselovskii, Igor; Kolgotin, Alexei; Korenskiy, Michael; Alados-Arboledas, Lucas

    2013-01-01

    In this work we study the effects of systematic and random errors on the inversion of multiwavelength (MW) lidar data using the well-known regularization technique to obtain vertically resolved aerosol microphysical properties. The software implementation used here was developed at the Physics Instrumentation Center (PIC) in Troitsk (Russia) in conjunction with the NASA/Goddard Space Flight Center. Its applicability to Raman lidar systems based on backscattering measurements at three wavelengths (355, 532 and 1064 nm) and extinction measurements at two wavelengths (355 and 532 nm) has been demonstrated widely. The systematic error sensitivity is quantified by first determining the retrieved parameters for a given set of optical input data consistent with three different sets of aerosol physical parameters. Then each optical input is perturbed by varying amounts and the inversion is repeated. Using bimodal aerosol size distributions, we find a generally linear dependence of the retrieved errors in the microphysical properties on the induced systematic errors in the optical data. For the retrievals of effective radius, number/surface/volume concentrations and fine-mode radius and volume, we find that these results are not significantly affected by the range of the constraints used in inversions. But significant sensitivity was found to the allowed range of the imaginary part of the particle refractive index. Our results also indicate that there exists an additive property for the deviations induced by the biases present in the individual optical data. This property permits the results here to be used to predict deviations in retrieved parameters when multiple input optical data are biased simultaneously as well as to study the influence of random errors on the retrievals. The above results are applied to questions regarding lidar design, in particular for the spaceborne multiwavelength lidar under consideration for the upcoming ACE mission.

  8. Edge profile analysis of Joint European Torus (JET) Thomson scattering data: Quantifying the systematic error due to edge localised mode synchronisation.

    Science.gov (United States)

    Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R

    2016-01-01

    The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
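
    For readers unfamiliar with the pedestal fit itself, the sketch below fits one common modified-tanh variant to a synthetic edge profile with scipy. The functional form, parameter values and noise level are illustrative, and the ELM synchronisation step (overlaying profiles from the same phase of the ELM cycle) and the instrument-function deconvolution are omitted.

        import numpy as np
        from scipy.optimize import curve_fit

        def mtanh(r, h, b, p, w, s):
            """Modified tanh pedestal shape: height h, offset b, pedestal
            position p, width w, and linear core slope s (one common variant)."""
            z = (p - r) / (2 * w)
            return 0.5 * h * (((1 + s * z) * np.exp(z) - np.exp(-z))
                              / (np.exp(z) + np.exp(-z)) + 1) + b

        rng = np.random.default_rng(6)
        r = np.linspace(3.75, 3.90, 60)                      # major radius (m)
        true = (1.2, 0.1, 3.86, 0.008, 0.05)                 # h, b, p, w, s
        ne = mtanh(r, *true) + rng.normal(0, 0.03, r.size)   # noisy profile
        popt, pcov = curve_fit(mtanh, r, ne, p0=(1.0, 0.0, 3.85, 0.01, 0.0),
                               bounds=([0, -1, 3.80, 0.002, -1],
                                       [3,  1, 3.95, 0.050,  1]))
        print(f"pedestal width: {popt[3]*1000:.2f} "
              f"+/- {np.sqrt(pcov[3, 3])*1000:.2f} mm")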

  9. How the credit assignment problems in motor control could be solved after the cerebellum predicts increases in error.

    Science.gov (United States)

    Verduzco-Flores, Sergio O; O'Reilly, Randall C

    2015-01-01

    We present a cerebellar architecture with two main characteristics. The first one is that complex spikes respond to increases in sensory errors. The second one is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations.

  10. How the credit assignment problems in motor control could be solved after the cerebellum predicts increases in error

    Directory of Open Access Journals (Sweden)

    Sergio Oscar Verduzco-Flores

    2015-03-01

    Full Text Available We present a cerebellar architecture with two main characteristics. The first one is that complex spikes respond to increases in sensory errors. The second one is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error and distal learning problems. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations.

  11. Current error vector based prediction control of the section winding permanent magnet linear synchronous motor

    Energy Technology Data Exchange (ETDEWEB)

    Hong Junjie, E-mail: hongjjie@mail.sysu.edu.cn [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China); Li Liyi, E-mail: liliyi@hit.edu.cn [Dept. Electrical Engineering, Harbin Institute of Technology, Harbin 150000 (China); Zong Zhijian; Liu Zhongtu [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China)

    2011-10-15

    Highlights: → The structure of the permanent magnet linear synchronous motor (SW-PMLSM) is new. → A new current control method, CEVPC, is employed in this motor. → The sectional power supply method is different from the others and effective. → The performance degrades under voltage and current limitations. - Abstract: To include features such as greater thrust density and higher efficiency without reducing thrust stability, this paper proposes a section winding permanent magnet linear synchronous motor (SW-PMLSM), whose iron core is continuous while the winding is divided. The discrete system model of the motor is derived. With the definition of the current error vector and selection of the value function, the theory of the current error vector based prediction control (CEVPC) for the motor currents is explained clearly. According to the winding section feature, the motion region of the mover is divided into five zones, in which the implementation of the current predictive control method is proposed. Finally, the experimental platform is constructed and experiments are carried out. The results show that the current control has good dynamic response and that the thrust on the mover remains essentially constant.

  12. Quality controls in integrative approaches to detect errors and inconsistencies in biological databases

    Directory of Open Access Journals (Sweden)

    Ghisalberti Giorgio

    2010-12-01

    Full Text Available Numerous biomolecular data are available, but they are scattered in many databases and only some of them are curated by experts. Most available data are computationally derived and include errors and inconsistencies. Effective use of available data in order to derive new knowledge hence requires data integration and quality improvement. Many approaches for data integration have been proposed. Data warehousing seems to be the most adequate when comprehensive analysis of integrated data is required. This also makes it the most suitable for implementing comprehensive quality controls on integrated data. We previously developed GFINDer (http://www.bioinformatics.polimi.it/GFINDer/), a web system that supports scientists in effectively using available information. It allows comprehensive statistical analysis and mining of functional and phenotypic annotations of gene lists, such as those identified by high-throughput biomolecular experiments. The GFINDer backend is composed of a multi-organism genomic and proteomic data warehouse (GPDW). Within the GPDW, several controlled terminologies and ontologies, which describe gene and gene product related biomolecular processes, functions and phenotypes, are imported and integrated, together with their associations with genes and proteins of several organisms. In order to ease keeping the GPDW updated and to ensure the best possible quality of the data integrated in subsequent updates of the data warehouse, we developed several automatic procedures. Within them, we implemented numerous data quality control techniques to test the integrated data for a variety of possible errors and inconsistencies. Among other features, the implemented controls check data structure and completeness, ontological data consistency, ID format and evolution, unexpected data quantification values, and consistency of data from single and multiple sources. We use the implemented controls to analyze the quality of data available from several

  13. Detecting errors and anomalies in computerized materials control and accountability databases

    International Nuclear Information System (INIS)

    Whiteson, R.; Hench, K.; Yarbro, T.; Baumgart, C.

    1998-01-01

    The Automated MC and A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC and A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC and A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC and A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results
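
    Rule-based checks of the kind such an expert system encodes can be sketched in a few lines; the record layout, ID pattern and rules below are entirely hypothetical, for illustration only.

        import re

        # hypothetical record layout: (item_id, material_type, mass_kg)
        RECORDS = [
            ("PU-00123", "Pu", 4.20),
            ("PU-9X441", "Pu", -0.50),     # malformed ID and impossible mass
            ("HEU-0042", "HEU", 1.75),
        ]

        ID_PATTERN = re.compile(r"^(PU|HEU)-\d{4,5}$")

        def check(record):
            """Apply simple anomaly rules: ID format, physically plausible
            values, and ID/material-type consistency."""
            item_id, mtype, mass = record
            problems = []
            if not ID_PATTERN.match(item_id):
                problems.append("bad ID format")
            if mass < 0:
                problems.append("negative mass")
            if not item_id.startswith(mtype.upper()):
                problems.append("ID/material-type mismatch")
            return problems

        for rec in RECORDS:
            print(rec[0], "->", check(rec) or "ok")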

  14. The Neural-fuzzy Thermal Error Compensation Controller on CNC Machining Center

    Science.gov (United States)

    Tseng, Pai-Chung; Chen, Shen-Len

    The geometric errors and structural thermal deformation are factors that influence the machining accuracy of a Computer Numerical Control (CNC) machining center. Therefore, researchers pay attention to thermal error compensation technologies for CNC machine tools. Some real-time error compensation techniques have been successfully demonstrated in both laboratories and industrial sites, but the compensation results still need to be enhanced. In this research, neural-fuzzy theory has been applied to derive a thermal prediction model. An IC-type thermometer has been used to detect the temperature variation of the heat sources. The thermal drifts are measured online by a touch-triggered probe with a standard bar. A thermal prediction model is then derived by neural-fuzzy theory based on the temperature variation and the thermal drifts. A Graphic User Interface (GUI) system is also built to provide a user-friendly operation interface, using Inprise C++ Builder. The experimental results show that the thermal prediction model developed by the neural-fuzzy methodology can improve machining accuracy from 80 µm to 3 µm. Compared with multi-variable linear regression analysis, the compensation accuracy is increased from ±10 µm to ±3 µm.
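
The multi-variable linear regression that the abstract uses as its comparison baseline is easy to reproduce in outline. The sketch below fits synthetic sensor temperatures to synthetic drift data; the sensor count, coefficients, and noise level are invented for illustration.

```python
import numpy as np

# Baseline comparator from the abstract: a multi-variable linear regression
# mapping sensor temperature rises to spindle thermal drift. Data are synthetic.
rng = np.random.default_rng(0)
T = rng.uniform(0.0, 15.0, size=(200, 4))            # 4 sensor temperature rises (degC)
true_coef = np.array([2.1, 1.3, 0.4, 3.0])           # um of drift per degC (made up)
drift = T @ true_coef + rng.normal(0.0, 1.0, 200)    # measured drift (um)

# Least-squares fit with an intercept term.
X = np.column_stack([np.ones(len(T)), T])
coef, *_ = np.linalg.lstsq(X, drift, rcond=None)

# Compensation = subtract the predicted drift from the commanded position.
residual = drift - X @ coef
print("residual std (um):", residual.std().round(2))
```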

  15. Systematic review of control groups in nutrition education intervention research.

    Science.gov (United States)

    Byrd-Bredbenner, Carol; Wu, FanFan; Spaccarotella, Kim; Quick, Virginia; Martin-Biggers, Jennifer; Zhang, Yingting

    2017-07-11

    Well-designed research trials are critical for determining the efficacy and effectiveness of nutrition education interventions. To determine whether behavioral and/or cognition changes can be attributed to an intervention, the experimental design must include a control or comparison condition against which outcomes from the experimental group can be compared. Despite the impact different types of control groups can have on study outcomes, the treatment provided to participants in the control condition has received limited attention in the literature. A systematic review of control groups in nutrition education interventions was conducted to better understand how control conditions are described in peer-reviewed journal articles compared with experimental conditions. To be included in the systematic review, articles had to be indexed in CINAHL, PubMed, PsycINFO, WoS, and/or ERIC and report primary research findings of controlled nutrition education intervention trials conducted in the United States with free-living consumer populations and published in English between January 2005 and December 2015. Key elements extracted during data collection included treatment provided to the experimental and control groups (e.g., overall intervention content, tailoring methods, delivery mode, format, duration, setting, and session descriptions, and procedures for standardizing, fidelity of implementation, and blinding); rationale for control group type selected; sample size and attrition; and theoretical foundation. The search yielded 43 publications; about one-third of these had an inactive control condition, which is considered a weak study design. Nearly two-thirds of reviewed studies had an active control condition considered a stronger research design; however, many failed to report one or more key elements of the intervention, especially for the control condition. None of the experimental and control group treatments were sufficiently detailed to permit replication of the

  16. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen a very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie
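
A toy instance of the solver/checker pairing the record describes: the solver is trapezoidal quadrature and the checker is an a posteriori error estimate obtained by comparing two refinement levels (Richardson extrapolation). The tolerance-driven loop, not the particular quadrature rule, is the point of the sketch.

```python
import math

def solve_and_check(f, a, b, tol):
    """Toy solver/checker pair: trapezoidal quadrature with an a posteriori
    error estimate from comparing successive mesh refinements."""
    def trap(n):
        h = (b - a) / n
        return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))
    n = 2
    coarse = trap(n)
    while True:
        fine = trap(2 * n)
        # Checker: for the trapezoidal rule (error ~ h^2), the error of the
        # fine result is approximately (fine - coarse) / 3.
        est_error = abs(fine - coarse) / 3.0
        if est_error < tol:
            return fine, est_error
        coarse, n = fine, 2 * n

value, err = solve_and_check(math.sin, 0.0, math.pi, 1e-8)
print(value, err)  # the verified approximation should be close to 2
```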

  17. A queueing model for error control of partial buffer sharing in ATM

    Directory of Open Access Journals (Sweden)

    Ahn Boo Yong

    1999-01-01

    Full Text Available We model the error control of the partial buffer sharing of ATM by a queueing system M1, M2/G/1/K+1 with threshold and instantaneous Bernoulli feedback. We first derive the system equations and develop a recursive method to compute the loss probabilities at an arbitrary time epoch. We then build an approximation scheme to compute the mean waiting time of each class of cells. An algorithm is developed for finding the optimal threshold and queue capacity for a given quality of service.

  18. Brezzi-Pitkaranta stabilization and a priori error analysis for the Stokes Control

    Directory of Open Access Journals (Sweden)

    Aytekin Cibik

    2016-12-01

    Full Text Available In this study, we consider a Brezzi-Pitkaranta stabilization scheme for the optimal control problem governed by stationary Stokes equation, using a P1-P1 interpolation for velocity and pressure. We express the stabilization as extra terms added to the discrete variational form of the problem.  We first prove the stability of the finite element discretization of the problem. Then, we derive a priori error bounds for each variable and present a numerical example to show the effectiveness of the stabilization clearly.
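
For context, one common way to write the Brezzi-Pitkaranta stabilized discretization sketched in this abstract is the following, with stabilization parameter δ > 0 and element diameter h_K; this is a generic form, not necessarily the exact scaling used in the paper.

```latex
% Stabilized P1-P1 Stokes discretization (schematic): find (u_h, p_h) with
\begin{aligned}
\nu\,(\nabla u_h, \nabla v_h) - (p_h, \nabla\!\cdot v_h) &= (f, v_h)
  && \text{for all } v_h,\\
(\nabla\!\cdot u_h, q_h) + \delta \sum_{K} h_K^{2}\,(\nabla p_h, \nabla q_h)_K &= 0
  && \text{for all } q_h.
\end{aligned}
```

The extra pressure-gradient term is what circumvents the inf-sup (LBB) restriction that otherwise rules out the equal-order P1-P1 pair.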

  19. Automated systems help prevent operator error during [reactor] I and C [instrumentation and control] testing

    International Nuclear Information System (INIS)

    Courcoux, R.

    1989-01-01

    On a nuclear steam supply system, even a minor failure can involve actuation of the whole reactor protection system (RPS). To reduce the likelihood of human error leading to unwanted trips during the maintenance of instrumentation and control systems, Framatome has been developing and installing various automated testing systems. Such automated systems are particularly helpful when periodic tests with a potential for RPS actuation have to be carried out, or when the test is on the critical path for the refuelling outage. The Sensitive Channel Programme described is an example of the sort of work that has been done. (author)

  20. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach

  1. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.
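
Both copies of this record describe the same HJB characterization; schematically, and hedged on sign conventions (which vary between references), it has the shape below. The oblique derivative boundary condition acts in the auxiliary direction y, which tracks the running maximum of the cost g, and φ is the terminal cost.

```latex
% Schematic HJB system for the auxiliary value function v(t,x,y):
\begin{aligned}
-\partial_t v + H\!\left(x, D_x v, D_x^2 v\right) &= 0,
  && t\in[0,T),\ y > g(x),\\
-\partial_y v &= 0,
  && t\in[0,T),\ y = g(x),\\
v(T,x,y) &= \varphi(x,y).
\end{aligned}
```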

  2. Adaptive control of nonlinear system using online error minimum neural networks.

    Science.gov (United States)

    Jia, Chao; Li, Xiaoli; Wang, Kang; Ding, Dawei

    2016-11-01

    In this paper, a new learning algorithm named OEM-ELM (Online Error Minimized-ELM) is proposed, based on the ELM (Extreme Learning Machine) neural network algorithm and a growing network structure. The core idea of the OEM-ELM algorithm is: online learning, evaluation of network performance, and increasing the number of hidden nodes. It combines the advantages of OS-ELM and EM-ELM, which can improve the capability of identification and avoid network redundancy. An adaptive control scheme based on the proposed OEM-ELM algorithm is set up, which has a stronger adaptive capability with respect to changes in the environment. The adaptive control of a Continuous Stirred Tank Reactor (CSTR) chemical process is also given as an application. The simulation results show that, compared with the traditional ELM algorithm, the proposed algorithm can avoid network redundancy and greatly improve control performance. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
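
A compressed sketch of the growing-ELM idea (random hidden layer, least-squares output weights, nodes added until a validation target is met). For brevity a batch refit stands in for the paper's online recursive update, so this illustrates EM-ELM-style growth rather than OEM-ELM itself; all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_features(X, W, b):
    # Sigmoid hidden layer with random input weights (the ELM ingredient).
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def grow_elm(X, y, X_val, y_val, target_rmse, max_nodes=200, step=10):
    """Grow hidden nodes until validation error meets the target."""
    W = rng.normal(size=(X.shape[1], 0))
    b = rng.normal(size=(0,))
    while W.shape[1] < max_nodes:
        W = np.hstack([W, rng.normal(size=(X.shape[1], step))])
        b = np.concatenate([b, rng.normal(size=step)])
        H = elm_features(X, W, b)
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
        rmse = np.sqrt(np.mean((elm_features(X_val, W, b) @ beta - y_val) ** 2))
        if rmse <= target_rmse:
            break
    return W, b, beta

# Toy usage: learn y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 300)
W, b, beta = grow_elm(X[:200], y[:200], X[200:], y[200:], target_rmse=0.1)
print("hidden nodes used:", W.shape[1])
```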

  3. Do People With Severe Traumatic Brain Injury Benefit From Making Errors? A Randomized Controlled Trial of Error-Based and Errorless Learning.

    Science.gov (United States)

    Ownsworth, Tamara; Fleming, Jennifer; Tate, Robyn; Beadle, Elizabeth; Griffin, Janelle; Kendall, Melissa; Schmidt, Julia; Lane-Brown, Amanda; Chevignard, Mathilde; Shum, David H K

    2017-12-01

    Errorless learning (ELL) and error-based learning (EBL) are commonly used approaches to rehabilitation for people with traumatic brain injury (TBI). However, it is unknown whether making errors is beneficial in the learning process to promote skills generalization after severe TBI. To compare the efficacy of ELL and EBL for improving skills generalization, self-awareness, behavioral competency, and psychosocial functioning after severe TBI. A total of 54 adults (79% male; mean age = 38.0 years, SD = 13.4) with severe TBI were randomly allocated to ELL or EBL and received 8 × 1.5-hour therapy sessions that involved meal preparation and other goal-directed activities. The primary outcome was total errors on the Cooking Task (near-transfer). Secondary outcome measures included the Zoo Map Test (far-transfer), Awareness Questionnaire, Patient Competency Rating Scale, Sydney Psychosocial Reintegration Scale, and Care and Needs Scale. Controlling for baseline performance and years of education, participants in the EBL group made significantly fewer errors at postintervention (mean = 36.25; 95% CI = 32.5-40.0) than ELL participants (mean = 42.57; 95% CI = 38.8-46.3). EBL participants also demonstrated greater self-awareness and behavioral competency at postintervention than ELL participants (P < .05); group differences were not significant for the remaining outcomes (P > .05), or at the 6-month follow-up assessment. EBL was found to be more effective than ELL for enhancing skills generalization on a task related to training and improving self-awareness and behavioral competency.

  4. Interpretations of systematic errors in the NCEP Climate Forecast System at lead times of 2, 4, 8, ..., 256 days

    Directory of Open Access Journals (Sweden)

    Siwon Song

    2012-09-01

    Full Text Available The climatology of mean bias errors (relative to 1-day forecasts) was examined in a 20-year hindcast set from version 1 of the Climate Forecast System (CFS), for forecast lead times of 2, 4, 8, 16, ... 256 days, verifying in different seasons. Results mostly confirm the simple expectation that atmospheric model biases should be evident at short lead (2–4 days), while soil moisture errors develop over days-weeks and ocean errors emerge over months. A further simplification is also evident: surface temperature bias patterns have nearly fixed geographical structure, growing with different time scales over land and ocean. The geographical pattern has mostly warm and dry biases over land and cool bias over the oceans, with two main exceptions: (1) deficient stratocumulus clouds cause warm biases in eastern subtropical oceans, and (2) high latitude land is too cold in boreal winter. Further study of the east Pacific cold tongue-Intertropical Convergence Zone (ITCZ) complex shows a possible interaction between a rapidly-expressed atmospheric model bias (poleward shift of deep convection beginning at day 2) and slow ocean dynamics (erroneously cold upwelling along the equator) in leads > 1 month. Further study of the high latitude land cold bias shows that it is a thermal wind balance aspect of the deep polar vortex, not just a near-surface temperature error under the wintertime inversion, suggesting that its development time scale of weeks to months may involve long timescale processes in the atmosphere, not necessarily in the land model. Winter zonal wind errors are small in magnitude, but a refractive index map shows that this can cause modest errors in Rossby wave ducting. Finally, as a counterpoint to our initial expectations about error growth, a case of non-monotonic error growth is shown: velocity potential bias grows with lead on a time scale of weeks, then decays over months. It is hypothesized that compensations between land and ocean errors may

  5. Is blood pressure reduction a valid surrogate endpoint for stroke prevention? an analysis incorporating a systematic review of randomised controlled trials, a by-trial weighted errors-in-variables regression, the surrogate threshold effect (STE and the biomarker-surrogacy (BioSurrogate evaluation schema (BSES

    Directory of Open Access Journals (Sweden)

    Lassere Marissa N

    2012-03-01

    Full Text Available Background Blood pressure is considered to be a leading example of a valid surrogate endpoint. The aims of this study were to (i) formally evaluate systolic and diastolic blood pressure reduction as a surrogate endpoint for stroke prevention and (ii) determine what blood pressure reduction would predict a stroke benefit. Methods We identified randomised trials of at least six months duration comparing any pharmacologic anti-hypertensive treatment to placebo or no treatment, and reporting baseline blood pressure, on-trial blood pressure, and fatal and non-fatal stroke. Trials with fewer than five strokes in at least one arm were excluded. Errors-in-variables weighted least squares regression modelled the reduction in stroke as a function of systolic blood pressure reduction and diastolic blood pressure reduction respectively. The lower 95% prediction band was used to determine the minimum systolic blood pressure and diastolic blood pressure difference, the surrogate threshold effect (STE), below which there would be no predicted stroke benefit. The STE was used to generate the surrogate threshold effect proportion (STEP), a surrogacy metric, which with the R-squared trial-level association was used to evaluate blood pressure as a surrogate endpoint for stroke using the Biomarker-Surrogacy Evaluation Schema (BSES3). Results In 18 qualifying trials representing all pharmacologic drug classes of antihypertensives, assuming a reliability coefficient of 0.9, the surrogate threshold effect for a stroke benefit was 7.1 mmHg for systolic blood pressure and 2.4 mmHg for diastolic blood pressure. The trial-level association was 0.41 and 0.64 and the STEP was 66% and 78% for systolic and diastolic blood pressure respectively. The STE and STEP were more robust to measurement error in the independent variable than R-squared trial-level associations. Using the BSES3, assuming a reliability coefficient of 0.9, systolic blood pressure was a B + grade and

  6. Is blood pressure reduction a valid surrogate endpoint for stroke prevention? an analysis incorporating a systematic review of randomised controlled trials, a by-trial weighted errors-in-variables regression, the surrogate threshold effect (STE) and the biomarker-surrogacy (BioSurrogate) evaluation schema (BSES)

    Science.gov (United States)

    2012-01-01

    Background Blood pressure is considered to be a leading example of a valid surrogate endpoint. The aims of this study were to (i) formally evaluate systolic and diastolic blood pressure reduction as a surrogate endpoint for stroke prevention and (ii) determine what blood pressure reduction would predict a stroke benefit. Methods We identified randomised trials of at least six months duration comparing any pharmacologic anti-hypertensive treatment to placebo or no treatment, and reporting baseline blood pressure, on-trial blood pressure, and fatal and non-fatal stroke. Trials with fewer than five strokes in at least one arm were excluded. Errors-in-variables weighted least squares regression modelled the reduction in stroke as a function of systolic blood pressure reduction and diastolic blood pressure reduction respectively. The lower 95% prediction band was used to determine the minimum systolic blood pressure and diastolic blood pressure difference, the surrogate threshold effect (STE), below which there would be no predicted stroke benefit. The STE was used to generate the surrogate threshold effect proportion (STEP), a surrogacy metric, which with the R-squared trial-level association was used to evaluate blood pressure as a surrogate endpoint for stroke using the Biomarker-Surrogacy Evaluation Schema (BSES3). Results In 18 qualifying trials representing all pharmacologic drug classes of antihypertensives, assuming a reliability coefficient of 0.9, the surrogate threshold effect for a stroke benefit was 7.1 mmHg for systolic blood pressure and 2.4 mmHg for diastolic blood pressure. The trial-level association was 0.41 and 0.64 and the STEP was 66% and 78% for systolic and diastolic blood pressure respectively. The STE and STEP were more robust to measurement error in the independent variable than R-squared trial-level associations. Using the BSES3, assuming a reliability coefficient of 0.9, systolic blood pressure was a B + grade and diastolic blood pressure
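
The surrogate-threshold computation can be illustrated with ordinary least squares standing in for the paper's by-trial weighted errors-in-variables fit (which additionally models measurement error in blood pressure). Data below are synthetic; the STE is read off where the conservative edge of the 95% prediction band stops excluding "no benefit".

```python
import numpy as np
from scipy import stats

# Synthetic trial-level data standing in for the review's 18 trials:
# x = mean on-trial SBP reduction (mmHg), y = log relative risk of stroke.
rng = np.random.default_rng(2)
x = rng.uniform(2, 20, 18)
y = -0.04 * x + rng.normal(0, 0.08, 18)   # made-up surrogate relationship

n = len(x)
slope, intercept, *_ = stats.linregress(x, y)
s = np.sqrt(np.sum((y - (intercept + slope * x)) ** 2) / (n - 2))
xbar, sxx = x.mean(), np.sum((x - x.mean()) ** 2)

def conservative_band(x0, level=0.95):
    """Prediction-band limit nearest to 'no effect' (upper limit of log RR)."""
    t = stats.t.ppf(0.5 + level / 2, n - 2)
    se = s * np.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)
    return intercept + slope * x0 + t * se

# STE: smallest BP reduction whose band still excludes log RR = 0.
grid = np.linspace(0, 25, 2501)
benefit = grid[conservative_band(grid) < 0]
print("surrogate threshold effect ~", benefit[0] if benefit.size else None, "mmHg")
```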

  7. Community effectiveness of copepods for dengue vector control: systematic review.

    Science.gov (United States)

    Lazaro, A; Han, W W; Manrique-Saide, P; George, L; Velayudhan, R; Toledo, J; Runge Ranzinger, S; Horstick, O

    2015-06-01

    Vector control remains the only available method for primary prevention of dengue. Several interventions exist for dengue vector control, with limited evidence of their efficacy and community effectiveness. This systematic review compiles and analyses the existing global evidence for community effectiveness of copepods for dengue vector control. The systematic review follows the PRISMA statement, searching six relevant databases. Applying all inclusion and exclusion criteria, 11 articles were included. There is evidence that cyclopoid copepods (Mesocyclops spp.) could potentially be an effective vector control option, as shown in five community effectiveness studies in Vietnam. This includes long-term effectiveness for larval and adult control of Ae. aegypti, as well as dengue incidence. However, this success has so far not been replicated elsewhere (six studies, three community effectiveness studies--Costa Rica, Mexico and USA, and three studies analysing both efficacy and community effectiveness--Honduras, Laos and USA), probably due to community participation, environmental and/or biological factors. Judging by the quality of existing studies, there is a lack of good study design, data quality and appropriate statistics. There is limited evidence for the use of cyclopoid copepods as a single intervention. There are very few studies, and more are needed in other communities and environments. Clear best practice guidelines for the methodology of entomological studies should be developed. © 2015 John Wiley & Sons Ltd.

  8. Fuzzy Adaptation Algorithms’ Control for Robot Manipulators with Uncertainty Modelling Errors

    Directory of Open Access Journals (Sweden)

    Yongqing Fan

    2018-01-01

    Full Text Available A novel fuzzy control scheme with adaptation algorithms is developed for a robot manipulator system. First, an adjustable parameter is introduced into the fuzzy logic system; the robot manipulator system with uncertain nonlinear terms serves as the master device and a reference-model dynamic system as the slave robot system. To overcome limitations of conventional fuzzy logic systems, such as the online learning computation burden and the fixed logic structure, the adjustable parameter composes a fuzzy logic system with updated parameter laws, from which a new style of adaptation-algorithm controller can be formed. The error closed-loop dynamical system can be stabilized based on Lyapunov analysis, the online learning computation burden can be reduced greatly, and different kinds of fuzzy logic systems, with or without fuzzy rules, are also suited. Finally, the effectiveness of the proposed approach is shown in a simulation example.

  9. An empirical study on the human error recovery failure probability when using soft controls in NPP advanced MCRs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2014-01-01

    Highlights: • Many researchers have tried to understand human recovery process or step. • Modeling human recovery process is not sufficient to be applied to HRA. • The operation environment of MCRs in NPPs has changed by adopting new HSIs. • Recovery failure probability in a soft control operation environment is investigated. • Recovery failure probability here would be important evidence for expert judgment. - Abstract: It is well known that probabilistic safety assessments (PSAs) today consider not just hardware failures and environmental events that can impact upon risk, but also human error contributions. Consequently, the focus on reliability and performance management has been on the prevention of human errors and failures rather than the recovery of human errors. However, the recovery of human errors is as important as the prevention of human errors and failures for the safe operation of nuclear power plants (NPPs). For this reason, many researchers have tried to find a human recovery process or step. However, modeling the human recovery process is not sufficient to be applied to human reliability analysis (HRA), which requires human error and recovery probabilities. In this study, therefore, human error recovery failure probabilities based on predefined human error modes were investigated by conducting experiments in the operation mockup of advanced/digital main control rooms (MCRs) in NPPs. To this end, 48 subjects majoring in nuclear engineering participated in the experiments. In the experiments, using the developed accident scenario based on tasks from the standard post trip action (SPTA), the steam generator tube rupture (SGTR), and predominant soft control tasks, which are derived from the loss of coolant accident (LOCA) and the excess steam demand event (ESDE), all error detection and recovery data based on human error modes were checked with the performance sheet and the statistical analysis of error recovery/detection was then

  10. Factors controlling volume errors through 2D gully erosion assessment: guidelines for optimal survey design

    Science.gov (United States)

    Castillo, Carlos; Pérez, Rafael

    2017-04-01

    The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, the application of methods based on 2D approaches can be the most cost-effective option in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles in order to 1) contribute to a better understanding of the drivers and magnitude of the uncertainty of gully erosion 2D-surveys and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: - Generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability) - Simulation of field measurements characterised by a survey intensity and the precision of the measurement method - Quantification of the volume error uncertainty as a function of the key factors In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey
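
A condensed version of the kind of stochastic experiment the abstract outlines, with all distributions invented: synthetic cross-section areas, a survey that samples every k-th section with noisy measurements, and Monte Carlo statistics of the resulting volume error.

```python
import numpy as np

rng = np.random.default_rng(3)

def volume_error(n_sections=200, spacing=1.0, survey_every=10,
                 cv=0.5, sigma_meas=0.05, n_runs=2000):
    """Monte Carlo estimate of the relative volume error of a 2D gully survey.
    cv sets the cross-section variability, survey_every the survey intensity,
    sigma_meas the precision of the measurement method (all hypothetical)."""
    errors = []
    for _ in range(n_runs):
        # Synthetic along-gully area profile (m^2) with lognormal variability.
        area = np.exp(rng.normal(0.0, cv, n_sections))
        true_vol = np.trapz(area, dx=spacing)
        # Field survey: measure every k-th section with multiplicative noise.
        idx = np.arange(0, n_sections, survey_every)
        measured = area[idx] * (1 + rng.normal(0, sigma_meas, idx.size))
        est_vol = np.trapz(measured, dx=spacing * survey_every)
        errors.append((est_vol - true_vol) / true_vol)
    return np.mean(errors), np.std(errors)

bias, spread = volume_error()
print(f"relative volume error: bias={bias:.3f}, sd={spread:.3f}")
```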

  11. Novel prescribed performance neural control of a flexible air-breathing hypersonic vehicle with unknown initial errors.

    Science.gov (United States)

    Bu, Xiangwei; Wu, Xiaoyan; Zhu, Fujing; Huang, Jiaqi; Ma, Zhen; Zhang, Rui

    2015-11-01

    A novel prescribed performance neural controller with unknown initial errors is addressed for the longitudinal dynamic model of a flexible air-breathing hypersonic vehicle (FAHV) subject to parametric uncertainties. Unlike traditional prescribed performance control (PPC), which requires the initial errors to be known accurately, this paper investigates tracking control without accurate initial errors by exploiting a new performance function. A combined neural back-stepping and minimal learning parameter (MLP) technique is employed to develop a prescribed performance controller that provides robust tracking of velocity and altitude reference trajectories. The highlight is that the transient performance of the velocity and altitude tracking errors is satisfactory and the computational load of the neural approximation is low. Finally, numerical simulation results from a nonlinear FAHV model demonstrate the efficacy of the proposed strategy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
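
For orientation, the classical PPC envelope that the abstract contrasts itself with is usually written as below, with design constants ρ0 > ρ∞ > 0, κ > 0, and δl, δu ∈ (0, 1]. Initializing it requires ρ0 > |e(0)|, which is exactly the knowledge of initial errors that the paper's new performance function dispenses with; the new function's precise form is not reproduced here.

```latex
% Classical prescribed performance envelope on a tracking error e(t):
-\delta_l\,\rho(t) < e(t) < \delta_u\,\rho(t),
\qquad
\rho(t) = (\rho_0 - \rho_\infty)\,e^{-\kappa t} + \rho_\infty .
```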

  12. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    International Nuclear Information System (INIS)

    Xu, H; Chetty, I; Wen, N

    2016-01-01

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30, 45, 60 degree) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6 degrees in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  13. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H [Wayne State University, Detroit, MI (United States); Chetty, I; Wen, N [Henry Ford Health System, Detroit, MI (United States)

    2016-06-15

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs using 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at various CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30, 45, 60 degree) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error was within 0.6 degrees in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and std. deviations of translational errors were −0.2±0.7, 0.04±0.5, 0.1±0.4 mm for LNG, LAT, VRT directions, respectively. For extra-cranial sites, means and std. deviations of translational errors were −0.04±1, 0.2±1, 0.1±1 mm for LNG, LAT, VRT directions, respectively. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian

  14. Bio-inspired adaptive feedback error learning architecture for motor control.

    Science.gov (United States)

    Tolu, Silvia; Vanegas, Mauricio; Luque, Niceto R; Garrido, Jesús A; Ros, Eduardo

    2012-10-01

    This study proposes an adaptive control architecture based on an accurate regression method called Locally Weighted Projection Regression (LWPR) and on a bio-inspired module, such as a cerebellar-like engine. This hybrid architecture takes full advantage of the machine learning module (LWPR kernel) to abstract an optimized representation of the sensorimotor space, while the cerebellar component integrates this to generate corrective terms in the framework of a control task. Furthermore, we illustrate how the use of a simple adaptive error feedback term allows the proposed architecture to be used even in the absence of an accurate analytic reference model. The presented approach achieves accurate control with low-gain corrective terms (for compliant control schemes). We evaluate the contribution of the different components of the proposed scheme by comparing the obtained performance with alternative approaches. Then, we show that the presented architecture can be used for accurate manipulation of different objects when their physical properties are not directly known by the controller. We evaluate how the scheme scales for simulated plants of high Degrees of Freedom (7-DOFs).
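
The adaptive error feedback idea reads as classical feedback error learning: the feedback command both drives the plant and teaches the feedforward model. Below is a minimal single-DOF sketch, with a linear-in-features model standing in for the LWPR/cerebellar module; the plant, gains, and learning rate are all invented.

```python
import numpy as np

# Minimal feedback-error-learning loop on a 1D plant x' = a*x + b*u.
a, b, dt = -1.0, 2.0, 0.01
w = np.zeros(2)                   # learned inverse model: u_ff = w @ [x_d, dx_d]
kp, eta = 5.0, 0.05               # feedback gain and learning rate

x = 0.0
for step in range(20000):
    t = step * dt
    x_d, dx_d = np.sin(t), np.cos(t)       # reference trajectory
    phi = np.array([x_d, dx_d])
    u_ff = w @ phi                          # feedforward from the learned model
    u_fb = kp * (x_d - x)                   # simple error feedback
    x += dt * (a * x + b * (u_ff + u_fb))   # plant step (explicit Euler)
    w += eta * u_fb * phi * dt              # FEL rule: feedback teaches u_ff

# Ideal inverse of this plant: u = (dx_d - a*x_d)/b = [0.5, 0.5] @ [x_d, dx_d],
# so the learned weights should approach roughly [0.5, 0.5].
print("learned weights:", w.round(2))
```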

  15. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  16. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature...... and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardwares. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface mounted PMSM system adapting vector control strategy...

  17. A Bayesian sequential design using alpha spending function to control type I error.

    Science.gov (United States)

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step in the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
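
The O'Brien-Fleming-type spending function the abstract singles out is, in the Lan-DeMets form, α(t) = 2(1 − Φ(z_{1−α/2}/√t)), where t is the information fraction. A short sketch of the cumulative and per-look alpha at equally spaced interim analyses (the look schedule is illustrative):

```python
import numpy as np
from scipy.stats import norm

def obrien_fleming_spending(t, alpha=0.05):
    """O'Brien-Fleming-type alpha spending (Lan-DeMets form): cumulative
    type I error spent by information fraction t in (0, 1]."""
    t = np.asarray(t, dtype=float)
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / np.sqrt(t)))

# Alpha spent at three equally spaced interim looks plus the final analysis.
looks = np.array([0.25, 0.5, 0.75, 1.0])
cumulative = obrien_fleming_spending(looks)
incremental = np.diff(np.concatenate([[0.0], cumulative]))
for frac, cum, inc in zip(looks, cumulative, incremental):
    print(f"t={frac:.2f}  cumulative={cum:.4f}  spent this look={inc:.4f}")
```

At t = 1 the function spends the full α by construction, which is what keeps the overall type I error at the nominal level.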

  18. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of a code-based cryptography (cryptocoding) technique for a multi-layer key distribution scheme is presented. A VLSI chip is designed for storing information on the generation of round keys. A new algorithm is developed for reduced key size with optimal performance. An error control algorithm is employed both for the generation of round keys and for the diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. The probability of retrieving the original key from any other round key is reduced by diffusing nonlinear selective bit inversions on the round keys. Randomized selective bit inversions are done on equal lengths of key bits by a Round Constant Feedback Shift Register within the error correction limits of the chosen code. The complexity of retrieving the original key from any other round key is increased by optimal hardware usage. The proposed design is simulated and synthesized using VHDL coding for a Spartan3E FPGA and results are shown. A comparative analysis is done between 128-bit Advanced Encryption Standard round keys and the proposed round keys to show the security strength of the proposed algorithm. This paper concludes that the chip-based multi-layer key distribution of the proposed algorithm is an enhanced solution to the existing threats on cryptography algorithms.

  19. Controlling type I error rate for fast track drug development programmes.

    Science.gov (United States)

    Shih, Weichung J; Ouyang, Peter; Quan, Hui; Lin, Yong; Michiels, Bart; Bijnens, Luc

    2003-03-15

    The U.S. Food and Drug Administration (FDA) Modernization Act of 1997 has a Section (No. 112) entitled 'Expediting Study and Approval of Fast Track Drugs' (the Act). In 1998, the FDA issued a 'Guidance for Industry: the Fast Track Drug Development Programs' (the FTDD programmes) to meet the requirement of the Act. The purpose of FTDD programmes is to 'facilitate the development and expedite the review of new drugs that are intended to treat serious or life-threatening conditions and that demonstrate the potential to address unmet medical needs'. Since then many health products have reached patients who suffered from AIDS, cancer, osteoporosis, and many other diseases, sooner by utilizing the Fast Track Act and the FTDD programmes. In the meantime several scientific issues have also surfaced when following the FTDD programmes. In this paper we will discuss the concept of two kinds of type I errors, namely, the 'conditional approval' and the 'final approval' type I errors, and propose statistical methods for controlling them in a new drug submission process. Copyright 2003 John Wiley & Sons, Ltd.

  20. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    International Nuclear Information System (INIS)

    Carl Stern; Martin Lee

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models

  1. Automatic component calibration and error diagnostics for model-based accelerator control. Phase I final report

    CERN Document Server

    Carl-Stern

    1999-01-01

    Phase I work studied the feasibility of developing software for automatic component calibration and error correction in beamline optics models. A prototype application was developed that corrects quadrupole field strength errors in beamline models.

  2. Systematic realisation of control flow analyses for CML

    DEFF Research Database (Denmark)

    Gasser, K.L.S.; Nielson, Flemming; Nielson, Hanne Riis

    1997-01-01

    We present a methodology for the systematic realisation of control flow analyses and illustrate it for Concurrent ML. We start with an abstract specification of the analysis that is next proved semantically sound with respect to a traditional small-step operational semantics; this result holds......) to be defined in a syntax-directed manner, and (iii) to generate a set of constraints that subsequently can be solved by standard techniques. We prove equivalence results between the different versions of the analysis; in particular it follows that the least solution to the constraints generated...

  3. Postural control in chronic obstructive pulmonary disease: a systematic review

    Directory of Open Access Journals (Sweden)

    Porto EF

    2015-06-01

    Full Text Available EF Porto,1,2 AAM Castro,1,3 VGS Schmidt,4 HM Rabelo,4 C Kümpel,2 OA Nascimento,5 JR Jardim5 1Pulmonary Rehabilitation Center, Federal University of São Paulo, 2Adventist University, São Paulo, 3Federal University of Pampa, Rio Grande do Sul, 4Pulmonary Rehabilitation Center, Adventist University, 5Respiratory Diseases, Pulmonary Rehabilitation Center, Federal University of São Paulo, São Paulo, Brazil Abstract: Patients with chronic obstructive pulmonary disease (COPD) fall frequently, although the risk of falls may seem less important than the respiratory consequences of the disease. Nevertheless, falls are associated with increased mortality, decreased independence and physical activity levels, and worsening of quality of life. The aims of this systematic review were to evaluate information in the literature with regard to whether impaired postural control is more prevalent in COPD patients than in healthy age-matched subjects, and to assess the main characteristics these patients present that contribute to impaired postural control. Methods: Five databases were searched with no dates or language limits. The MEDLINE, PubMed, EMBASE, Web of Science, and PEDro databases were searched using “balance”, “postural control”, and “COPD” as keywords. The search strategies were oriented and guided by a health science librarian and were performed on March 27, 2014. The studies included were those that evaluated postural control in COPD patients as their main outcome and scored more than five points on the PEDro scale. Studies supplied by the database search strategy were assessed independently by two blinded researchers. Results: A total of 484 manuscripts were found using the “balance in COPD or postural control in COPD” keywords. Forty-three manuscripts appeared more than once, and 397 did not evaluate postural control in COPD patients as the primary outcome. Thus, only 14 studies had postural control as their primary outcome. Our study

  4. A New Approach to Detection of Systematic Errors in Secondary Substation Monitoring Equipment Based on Short Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Javier Moriano

    2016-01-01

    Full Text Available In recent years, Secondary Substations (SSs) are being provided with equipment that allows their full management. This is particularly useful not only for monitoring and planning purposes but also for detecting erroneous measurements, which could negatively affect the performance of the SS. On the other hand, load forecasting is extremely important since it helps electricity companies to make crucial decisions regarding purchasing and generating electric power, load switching, and infrastructure development. In this regard, Short Term Load Forecasting (STLF) allows the electric power load to be predicted over an interval ranging from one hour to one week. However, important issues concerning error detection by employing STLF have not been specifically addressed until now. This paper proposes a novel STLF-based approach to the detection of gain and offset errors introduced by the measurement equipment. The implemented system has been tested against real power load data provided by electricity suppliers. Different gain and offset error levels are successfully detected.
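
The gain/offset detection principle can be sketched as a regression of measured load against the short-term forecast: a slope away from 1 indicates a gain error and a non-zero intercept an offset error. The thresholds and data below are invented, not the paper's calibrated procedure.

```python
import numpy as np

def detect_gain_offset(forecast, measured, gain_tol=0.05, offset_tol=5.0):
    """Regress measured load on the STLF forecast; tolerances are illustrative."""
    slope, intercept = np.polyfit(forecast, measured, 1)
    findings = {}
    if abs(slope - 1.0) > gain_tol:
        findings["gain_error"] = slope          # meter scale factor off
    if abs(intercept) > offset_tol:
        findings["offset_error_kW"] = intercept # constant additive bias
    return findings

# Toy example: a meter with a 10% gain error and a +20 kW offset.
rng = np.random.default_rng(5)
forecast = rng.uniform(100, 500, 168)           # one week of hourly loads (kW)
measured = 1.10 * forecast + 20 + rng.normal(0, 5, 168)
print(detect_gain_offset(forecast, measured))
```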

  5. Knowledge-based errors in anesthesia: a paired, controlled trial of learning and retention.

    Science.gov (United States)

    Goldhaber-Fiebert, Sara N; Goldhaber-Fiebert, Jeremy D; Rosow, Carl E

    2009-01-01

    Optimizing patient safety by improving the training of physicians is a major challenge of medical education. In this pilot study, we hypothesized that a brief lecture, targeted to rare but potentially dangerous situations, could improve anesthesia practitioners' knowledge levels with significant retention of learning at six months. In this paired controlled trial, anesthesia residents and attending physicians at Massachusetts General Hospital took the same 14-question multiple choice examination three times: at baseline, immediately after a brief lecture, and six months later. The lecture covered material on seven "intervention" questions; the remaining seven were "control" questions. The authors measured immediate knowledge acquisition, defined as the change in percentage of correct answers on intervention questions between baseline and post-lecture, and measured learning retention as the difference between baseline and six months. Both measurements were corrected for change in performance on control questions. Fifty of the 89 subjects completed all three examinations. The post-lecture increase in percentage of questions answered correctly, adjusted for control, was 22.2% [95% confidence interval (CI) 16.0-28.4%; P < 0.001], with significant retention of learning at six months. Exposing residents or other practitioners to this type of inexpensive teaching intervention may help them to avoid preventable uncommon errors that are rooted in unfamiliarity with the situation or the equipment. The methods used for this study may also be applied to compare the effect of various other teaching modalities while, at the same time, preserving participant anonymity and making adjustments for ongoing learning.

  6. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  7. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  8. Postural control in chronic obstructive pulmonary disease: a systematic review.

    Science.gov (United States)

    Porto, E F; Castro, A A M; Schmidt, V G S; Rabelo, H M; Kümpel, C; Nascimento, O A; Jardim, J R

    2015-01-01

    Patients with chronic obstructive pulmonary disease (COPD) fall frequently, although the risk of falls may seem less important than the respiratory consequences of the disease. Nevertheless, falls are associated with increased mortality, decreased independence and physical activity levels, and worsening of quality of life. The aims of this systematic review were to evaluate information in the literature with regard to whether impaired postural control is more prevalent in COPD patients than in healthy age-matched subjects, and to assess the main characteristics these patients present that contribute to impaired postural control. Five databases were searched with no dates or language limits. The MEDLINE, PubMed, EMBASE, Web of Science, and PEDro databases were searched using "balance", "postural control", and "COPD" as keywords. The search strategies were oriented and guided by a health science librarian and were performed on March 27, 2014. The studies included were those that evaluated postural control in COPD patients as their main outcome and scored more than five points on the PEDro scale. Studies supplied by the database search strategy were assessed independently by two blinded researchers. A total of 484 manuscripts were found using the "balance in COPD or postural control in COPD" keywords. Forty-three manuscripts appeared more than once, and 397 did not evaluate postural control in COPD patients as the primary outcome. Thus, only 14 studies had postural control as their primary outcome. Our study examiners found only seven studies that had a PEDro score higher than five points. The examiners' interrater agreement was 76.4%. Six of those studies were accomplished with a control group and one study used their patients as their own controls. The studies were published between 2004 and 2013. Patients with COPD present postural control impairment when compared with age-matched healthy controls. Associated factors contributing to impaired postural control were

  9. The impact of a brief mindfulness meditation intervention on cognitive control and error-related performance monitoring

    Directory of Open Access Journals (Sweden)

    Michael J Larson

    2013-07-01

    Full Text Available Meditation is associated with positive health behaviors and improved cognitive control. One mechanism for the relationship between meditation and cognitive control is changes in activity of the anterior cingulate cortex-mediated neural pathways. The error-related negativity (ERN) and error positivity (Pe) components of the scalp-recorded event-related potential (ERP) represent cingulate-mediated functions of performance monitoring that may be modulated by mindfulness meditation. We utilized a flanker task, an experimental design, and a brief mindfulness intervention in a sample of 55 healthy non-meditators (n = 28 randomly assigned to the mindfulness group and n = 27 randomly assigned to the control group) to examine autonomic nervous system functions as measured by blood pressure and indices of cognitive control as measured by response times, error rates, post-error slowing, and the ERN and Pe components of the ERP. Systolic blood pressure significantly differentiated groups following the mindfulness intervention and following the flanker task. There were non-significant differences between the mindfulness and control groups for response times, post-error slowing, and error rates on the flanker task. Amplitude and latency of the ERN did not differ between groups; however, amplitude of the Pe was significantly smaller in individuals in the mindfulness group than in the control group. Findings suggest that a brief mindfulness intervention is associated with reduced autonomic arousal and decreased amplitude of the Pe, an ERP associated with error awareness, attention, and motivational salience, but does not alter amplitude of the ERN or behavioral performance. Implications for brief mindfulness interventions and state versus trait affect theories of the ERN are discussed. Future research examining graded levels of mindfulness and tracking error awareness will clarify the relationship between mindfulness and performance monitoring.

  10. Working memory and inhibitory control across the life span: Intrusion errors in the Reading Span Test.

    Science.gov (United States)

    Robert, Christelle; Borella, Erika; Fagot, Delphine; Lecerf, Thierry; de Ribaupierre, Anik

    2009-04-01

    The aim of this study was to examine to what extent inhibitory control and working memory capacity are related across the life span. Intrusion errors committed by children and younger and older adults were investigated in two versions of the Reading Span Test. In Experiment 1, a mixed Reading Span Test with items of various list lengths was administered. Older adults and children recalled fewer correct words and produced more intrusions than did young adults. Also, age-related differences were found in the type of intrusions committed. In Experiment 2, an adaptive Reading Span Test was administered, in which the list length of items was adapted to each individual's working memory capacity. Age groups differed neither on correct recall nor on the rate of intrusions, but they differed on the type of intrusions. Altogether, these findings indicate that the availability of attentional resources influences the efficiency of inhibition across the life span.

  11. Determination of fission products and actinides by inductively coupled plasma-mass spectrometry using isotope dilution analysis. A study of random and systematic errors

    International Nuclear Information System (INIS)

    Ignacio Garcia Alonso, Jose

    1995-01-01

The theory of the propagation of errors (random and systematic) for isotope dilution analysis (IDA) has been applied to the analysis of fission products and actinide elements by inductively coupled plasma-mass spectrometry (ICP-MS). Systematic errors in ID-ICP-MS arising from mass discrimination (mass bias), detector non-linearity, and isobaric interferences in the measured isotopes have to be corrected for in order to achieve accurate results. The mass bias factor and the detector dead-time can be determined by using natural elements with well-defined isotope abundances. A combined method for the simultaneous determination of both factors is proposed. On the other hand, isobaric interferences for some fission products and actinides cannot be eliminated using mathematical corrections (due to the unknown isotope abundances in the sample) and a chemical separation is necessary. The theory for random error propagation in IDA has been applied to the determination of non-natural elements by ICP-MS taking into account all possible sources of uncertainty with pulse counting detection. For the analysis of fission products, the selection of the right spike isotope composition and spike-to-sample ratio can be performed by applying conventional random propagation theory. However, it has been observed that, in the experimental determination of the isotope abundances of the fission product elements to be determined, the correction for mass discrimination and the correction for detector dead-time losses contribute to the total random uncertainty. For the instrument used in the experimental part of this study, it was found that the random uncertainty on the measured isotope ratios followed Poisson statistics for low counting rates whereas, for high counting rates, source instability was the main source of error.
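A minimal numerical sketch of the three corrections discussed above (dead-time, mass bias, and the isotope dilution equation itself). The function names, the linear mass-bias law, and all example values are illustrative assumptions, not taken from the study:

```python
# Sketch of the ID-ICP-MS correction chain; values are illustrative.

def dead_time_correct(count_rate, tau=35e-9):
    """Non-paralyzable dead-time correction; tau = dead-time in s."""
    return count_rate / (1.0 - count_rate * tau)

def mass_bias_correct(r_measured, delta_m, eps=0.005):
    """Linear mass-bias law: eps = fractional bias per mass unit."""
    return r_measured / (1.0 + eps * delta_m)

def idms_moles(n_spike, r_blend, r_spike, r_sample):
    """Moles of the spike-monitored isotope present in the sample.

    r_* are (reference isotope)/(spike isotope) ratios in the spike,
    the sample, and the measured blend; standard IDA mass balance.
    """
    return n_spike * (r_spike - r_blend) / (r_blend - r_sample)

# Illustrative numbers only
rate = dead_time_correct(8.0e5)                  # corrected counts/s
r_blend = mass_bias_correct(1.25, delta_m=2)     # corrected blend ratio
print(idms_moles(n_spike=1e-9, r_blend=r_blend,
                 r_spike=100.0, r_sample=0.01))
```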

  12. Adaptive finite element analysis of incompressible viscous flow using posteriori error estimation and control of node density distribution

    International Nuclear Information System (INIS)

    Yashiki, Taturou; Yagawa, Genki; Okuda, Hiroshi

    1995-01-01

The adaptive finite element method based on a posteriori error estimation is known to be a powerful technique for analyzing practical engineering problems, since it removes the guesswork from mesh subdivision and gives high accuracy at relatively low computational cost. In the adaptive procedure, both the error estimation and the mesh generation according to the error estimator are essential. In this paper, the adaptive procedure is realized by automatic mesh generation based on the control of the node density distribution, which is decided according to the error estimator. The global percentage error, CPU time, degrees of freedom, and solution accuracy of the adaptive procedure are compared with those of the conventional method using regular meshes. Numerical examples such as driven cavity flows at various Reynolds numbers and flows around a cylinder have demonstrated the very high performance of the proposed adaptive procedure. (author)
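The estimate-adapt cycle described above reduces to a short loop. The sketch below is generic: the solver, error estimator, and density-driven mesh generator are placeholder callables, and the density mapping is an assumption, not the authors' formula:

```python
import numpy as np

def adaptive_solve(mesh, solve, estimate_error, remesh,
                   target_error=0.01, max_cycles=10):
    """Generic a posteriori adaptive loop.

    solve(mesh)             -> solution
    estimate_error(mesh, u) -> per-element error indicators (array)
    remesh(mesh, density)   -> new mesh built from a node-density field
    """
    u = None
    for _ in range(max_cycles):
        u = solve(mesh)
        eta = estimate_error(mesh, u)        # element-wise indicators
        if np.sqrt(np.sum(eta ** 2)) < target_error:
            break                            # global error target met
        # Map larger indicators to higher requested node density;
        # the square root softens the contrast between elements.
        density = np.sqrt(eta / eta.max())
        mesh = remesh(mesh, density)
    return mesh, u
```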

  13. SU-F-T-241: Reduction in Planning Errors Via a Process Control Developed Using the Eclipse Scripting API

    Energy Technology Data Exchange (ETDEWEB)

    Barbee, D; McCarthy, A; Galavis, P; Xu, A [NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g. minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The number of potential failure modes the PlanCheck script currently checks for in each category is: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse scripting API enabled plan checks to occur within the planning system, resulting in reduced error rates and improved efficiency. Future work includes: initiating full FMEA for planning workflow, extending categories to include additional checks outside of ESAPI via Aria
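The pattern behind such a plugin is a table of named rules evaluated against the plan before approval. The Python sketch below shows only that pattern; every field name, threshold, and rule is hypothetical and is not drawn from ESAPI or the authors' C# plugin:

```python
# Hypothetical plan-check rule table; all fields and limits here are
# illustrative stand-ins, not actual ESAPI properties.
CHECKS = [
    ("CT study within 30 days",   lambda p: p["ct_age_days"] <= 30),
    ("Body contour present",      lambda p: "BODY" in p["structures"]),
    ("EDW beams meet minimum MU", lambda p: all(b["mu"] >= 20
                                               for b in p["beams"]
                                               if b["wedge"] == "EDW")),
    ("Dose grid <= 2.5 mm",       lambda p: p["dose_grid_mm"] <= 2.5),
]

def run_plan_checks(plan):
    """Return the names of failed checks for review before approval."""
    return [name for name, rule in CHECKS if not rule(plan)]

plan = {"ct_age_days": 12, "structures": {"BODY", "PTV"},
        "beams": [{"mu": 35, "wedge": "EDW"}], "dose_grid_mm": 2.0}
print(run_plan_checks(plan) or "All checks passed")
```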

  14. Vision Servo Motion Control and Error Analysis of a Coplanar XXY Stage for Image Alignment Motion

    Directory of Open Access Journals (Sweden)

    Hau-Wei Lee

    2013-01-01

Full Text Available In recent years, with the demand for smart mobile phones with touch panels, alignment/compensation systems using alignment stages with vision servo control have also increased. Because the traditional stacked-type XYθ stage is heavy and accumulates assembly errors, it has gradually been replaced by the coplanar stage, characterized by three actuators on the same plane with three degrees of freedom. The simplest image alignment mode uses two cameras as the equipment for feedback control, with the work piece placed on the working stage. The work piece is usually engraved/marked. After the cameras capture images and the position of the mark in the camera is obtained by image processing, the mark can be moved to the designated position in the camera by moving the stage using an alignment algorithm. This study used a coplanar XXY stage with 1 μm positioning resolution. Because the resolution of the camera is about 3.75 μm per pixel, a subpixel technology is used, and the linear and angular alignment repeatability of the alignment system can achieve 1 μm and 5 arcsec, respectively. The visual servo motion control for alignment motion is completed within 1 second using the coplanar XXY stage.
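The core of such an alignment algorithm is converting the two measured mark positions into a rigid correction (dx, dy, dθ) for the stage. A minimal sketch assuming two fiducial marks and the stated 3.75 μm/pixel pitch; the mark coordinates are invented, and the mapping of (dx, dy, dθ) onto the three coplanar actuators is stage-specific and omitted:

```python
import numpy as np

PIXEL_PITCH_UM = 3.75   # camera resolution quoted in the abstract

def alignment_correction(measured_px, target_px):
    """Rigid 2-D correction (dx, dy in um, dtheta in rad) that maps
    the measured fiducial marks onto their target positions."""
    measured = np.asarray(measured_px, float) * PIXEL_PITCH_UM
    target = np.asarray(target_px, float) * PIXEL_PITCH_UM
    mc, tc = measured.mean(axis=0), target.mean(axis=0)
    m, t = measured[1] - measured[0], target[1] - target[0]
    # Rotation: angle between the mark-to-mark vectors.
    dtheta = np.arctan2(t[1], t[0]) - np.arctan2(m[1], m[0])
    dx, dy = tc - mc                    # translation of the centroid
    return dx, dy, dtheta

# Two marks seen by the two cameras (pixel coordinates, illustrative)
measured = [[100.0, 200.0], [900.0, 210.0]]
target = [[100.0, 205.0], [900.0, 205.0]]
print(alignment_correction(measured, target))
```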

  15. Systematic review on international practices in controlling waterpipe tobacco smoking.

    Science.gov (United States)

    Tee, Guat Hiong; Hairi, Noran N; Nordin, Fauziah; Choo, Wan Yuen; Chan, Ying Ying; Kaur, Gurpreet; Veerasingam, Pathma Devi; Bulgiba, Awang

    2015-01-01

Waterpipe tobacco smoking has become popular, especially among young people worldwide. Smokers are attracted by its sweeter, smoother smoke, its social ambience, and the misconception of reduced harm. The objective of this study was to systematically review the effects of waterpipe tobacco policies and practices in reducing its prevalence. A systematic review was conducted electronically using the PubMed, OVID, Science Direct, Proquest and Embase databases. All possible studies from 1980 to 2013 were initially screened based on titles and abstracts. The selected articles were subjected to data extraction and quality rating. Three studies met the inclusion criteria and were eligible for this review. Almost all of the waterpipe tobacco products and their accessories did not comply with the regulations on health warning labelling practices as stipulated under Article 11 of the WHO FCTC. In addition, the grisly new warning labels for cigarettes introduced by the Food and Drug Administration did not affect hookah tobacco smoking generally. Indoor air quality in smoking lounges was found to be poor and some hookah lounges were operated without smoke shop certification. Our findings revealed that only minimal information is available on practices for controlling waterpipe smoking to reduce its prevalence. The lack of comprehensive legislation or practices in controlling waterpipe smoking warrants further research and policy initiatives to curb this burgeoning global epidemic, especially among the vulnerable younger population.

  16. The Institute for Safe Medication Practices and Poison Control Centers: Collaborating to Prevent Medication Errors and Unintentional Poisonings.

    Science.gov (United States)

    Vaida, Allen J

    2015-06-01

    This article provides an overview on the Institute for Safe Medication Practices (ISMP), the only independent nonprofit organization in the USA devoted to the prevention of medication errors. ISMP developed the national Medication Errors Reporting Program (MERP) and investigates and analyzes errors in order to formulate recommendations to prevent further occurrences. ISMP works closely with the US Food and Drug Administration (FDA), drug manufacturers, professional organizations, and others to promote changes in package design, practice standards, and healthcare practitioner and consumer education. By collaborating with ISMP to share and disseminate information, Poison Control centers, emergency departments, and toxicologists can help decrease unintentional and accidental poisonings.

  17. Resisting attraction: Individual differences in executive control are associated with subject-verb agreement errors in production.

    Science.gov (United States)

    Veenstra, Alma; Antoniou, Kyriakos; Katsos, Napoleon; Kissine, Mikhail

    2018-04-19

    We propose that attraction errors in agreement production (e.g., the key to the cabinets are missing) are related to two components of executive control: working memory and inhibitory control. We tested 138 children aged 10 to 12, an age when children are expected to produce high rates of errors. To increase the potential of individual variation in executive control skills, participants came from monolingual, bilingual, and bidialectal language backgrounds. Attraction errors were elicited with a picture description task in Dutch and executive control was measured with a digit span task, Corsi blocks task, switching task, and attentional networks task. Overall, higher rates of attraction errors were negatively associated with higher verbal working memory and, independently, with higher inhibitory control. To our knowledge, this is the first demonstration of the role of both working memory and inhibitory control in attraction errors in production. Implications for memory- and grammar-based models are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    Science.gov (United States)

Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving space-borne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars such as those of the Global Precipitation Measurement (GPM) mission.
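One common convention for the systematic/random separation mentioned above is to fit a conditional-bias model against the reference and treat the residual as random error. The sketch below uses a simple linear model on synthetic data; it illustrates the idea only and is not the specific procedure of the study:

```python
import numpy as np

def decompose_error(satellite, reference):
    """Split satellite QPE error into systematic and random parts via
    a linear conditional-bias model: sat = a + b*ref + residual.
    Systematic part = (a + b*ref) - ref; random part = residual."""
    b, a = np.polyfit(reference, satellite, 1)
    fitted = a + b * reference
    return fitted - reference, satellite - fitted

rng = np.random.default_rng(0)
ref = rng.gamma(2.0, 2.0, 1000)                  # synthetic rain rates
sat = 0.5 + 0.8 * ref + rng.normal(0.0, 1.0, ref.size)
sys_err, rnd_err = decompose_error(sat, ref)
print(f"systematic MSE: {np.mean(sys_err**2):.3f}, "
      f"random variance: {np.var(rnd_err):.3f}")
```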

  19. Slotted rotatable target assembly and systematic error analysis for a search for long range spin dependent interactions from exotic vector boson exchange using neutron spin rotation

    Science.gov (United States)

    Haddock, C.; Crawford, B.; Fox, W.; Francis, I.; Holley, A.; Magers, S.; Sarsour, M.; Snow, W. M.; Vanderwerp, J.

    2018-03-01

We discuss the design and construction of a novel target array of nonmagnetic test masses used in a neutron polarimetry measurement made in a search for possible new exotic spin-dependent neutron-atom interactions of Nature at sub-mm length scales. This target was designed to accept and efficiently transmit a transversely polarized slow neutron beam through a series of long open parallel slots bounded by flat rectangular plates. These openings possessed equal atom density gradients normal to the slots from the flat test masses, with dimensions optimized to achieve maximum sensitivity to an exotic spin-dependent interaction from vector boson exchanges with ranges in the mm - μm regime. The parallel slots were oriented differently in four quadrants that can be rotated about the neutron beam axis in discrete 90° increments using a Geneva drive. The spin rotation signals from the 4 quadrants were measured using a segmented neutron ion chamber to suppress possible systematic errors from stray magnetic fields in the target region. We discuss the per-neutron sensitivity of the target to the exotic interaction, the design constraints, the potential sources of systematic errors which could be present in this design, and our estimate of the achievable sensitivity using this method.

  20. Integration of error tolerance into the design of control rooms of nuclear power plants

    International Nuclear Information System (INIS)

    Sepanloo, Kamran

    1998-08-01

Many failures of complex technological systems have been attributed to human error. Today, based on extensive research on the role of the human element in technological systems, it is known that human error cannot be totally eliminated in modern, flexible, or changing work environments by conventional design strategies (e.g. defence in depth) or by better instructions, nor should it be. Instead, the operators' ability to explore degrees of freedom should be supported, and means for recovering from the effects of errors should be included. This calls for innovative, error-tolerant design of technological systems. Integrating the error-tolerance concept into the design, construction, startup, and operation of nuclear power plants provides an effective means of reducing the occurrence of human error during all stages of plant life and therefore leads to a considerable enhancement of plant safety

  1. Detecting and correcting partial errors: Evidence for efficient control without conscious access.

    Science.gov (United States)

    Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B

    2014-09-01

    Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.

  2. Study on a new framework of Human Reliability Analysis to evaluate soft control execution error in advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

Highlights: • The operating environment of MCRs in NPPs has changed with the adoption of new HSIs. • Operating actions in NPP advanced MCRs are performed by soft control. • A new HRA framework should be considered in the HRA for advanced MCRs. • An HRA framework for evaluating soft control execution human error is suggested. • The suggested method will be helpful for analyzing human reliability in advanced MCRs. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). However, the operating environment of MCRs in NPPs has changed with the adoption of new Human-System Interfaces (HSIs) that are based on computer-based technologies. The MCRs that include these digital technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are a particularly important feature because operating actions in NPP advanced MCRs are performed by soft control. Due to the differences in interfaces between soft control and hardwired conventional controls, different Human Error Probabilities (HEPs) and a new HRA framework should be considered in the HRA for advanced MCRs. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis and reviewing the literature regarding widely accepted human error taxonomies. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, empirical analyses of human error and error recovery considering soft controls under an advanced MCR mockup were carried out to collect human error data, which is

  3. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Science.gov (United States)

    Beria, Harsh; Nanda, Trushnamayee; Singh Bisht, Deepak; Chatterjee, Chandranath

    2017-12-01

The last couple of decades have seen the outburst of a number of satellite-based precipitation products, with the Tropical Rainfall Measuring Mission (TRMM) the most widely used for hydrologic applications. The transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for the year 2014 and compared with the corresponding (2014) and retrospective (1998-2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  4. Does the GPM mission improve the systematic error component in satellite rainfall estimates over TRMM? An evaluation at a pan-India scale

    Directory of Open Access Journals (Sweden)

    H. Beria

    2017-12-01

Full Text Available The last couple of decades have seen the outburst of a number of satellite-based precipitation products, with the Tropical Rainfall Measuring Mission (TRMM) the most widely used for hydrologic applications. The transition of TRMM into the Global Precipitation Measurement (GPM) promises enhanced spatio-temporal resolution along with upgrades to sensors and rainfall estimation techniques. The dependence of systematic error components in rainfall estimates of the Integrated Multi-satellitE Retrievals for GPM (IMERG), and their variation with climatology and topography, was evaluated over 86 basins in India for the year 2014 and compared with the corresponding (2014) and retrospective (1998–2013) TRMM estimates. IMERG outperformed TRMM for all rainfall intensities across a majority of Indian basins, with significant improvement in low rainfall estimates showing smaller negative biases in 75 out of 86 basins. Low rainfall estimates in TRMM showed a systematic dependence on basin climatology, with significant overprediction in semi-arid basins, which gradually improved in the higher rainfall basins. Medium and high rainfall estimates of TRMM exhibited a strong dependence on basin topography, with declining skill in higher elevation basins. The systematic dependence of error components on basin climatology and topography was reduced in IMERG, especially in terms of topography. Rainfall-runoff modeling using the Variable Infiltration Capacity (VIC) model over two flood-prone basins (Mahanadi and Wainganga) revealed that improvement in rainfall estimates in IMERG did not translate into improvement in runoff simulations. More studies are required over basins in different hydroclimatic zones to evaluate the hydrologic significance of IMERG.

  5. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
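The qualitative point (coherent rotations add in amplitude, while stochastic Pauli errors add in probability) can be seen in a single-qubit toy model. This is an illustration of the general mechanism only, not the repetition-code calculation of the paper:

```python
import numpy as np

eps = 0.01                     # coherent over-rotation per cycle (rad)
cycles = np.array([1, 10, 100, 1000])

# Coherent error: n identical rotations compose into one rotation of
# angle n*eps, so the flip probability sin^2(n*eps) grows ~ (n*eps)^2.
coherent_flip = np.sin(cycles * eps) ** 2

# Pauli approximation: each cycle flips independently with probability
# p = sin^2(eps); probabilities, not amplitudes, accumulate (~ n*p).
p = np.sin(eps) ** 2
pauli_flip = 0.5 * (1.0 - (1.0 - 2.0 * p) ** cycles)

for n, c, s in zip(cycles, coherent_flip, pauli_flip):
    print(f"n={n:5d}  coherent={c:.3e}  pauli={s:.3e}")
```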

  6. Value-based HR practices, i-deals and clinical error control with CSR as a moderator.

    Science.gov (United States)

    Luu, Tuan; Rowley, Chris; Siengthai, Sununta; Thanh Thao, Vo

    2017-05-08

Purpose: Notwithstanding the rising importance of system factors in patient safety improvement, "human factors" such as idiosyncratic deals (i-deals), which also contribute to the adjustment of system deficiencies, should not be neglected. The purpose of this paper is to investigate the role of value-based HR practices in catalyzing i-deals, which then influence clinical error control. The research further examines the moderating role of corporate social responsibility (CSR) on the effect of value-based HR practices on i-deals. Design/methodology/approach: The data were collected from middle-level clinicians from hospitals in Vietnam. Findings: The research results confirmed the effect chain from value-based HR practices through i-deals to clinical error control, with CSR as a moderator. Originality/value: The HRM literature is expanded through enlisting i-deals and clinical error control as the outcomes of HR practices.

  7. Robust a Posteriori Error Control and Adaptivity for Multiscale, Multinumerics, and Mortar Coupling

    KAUST Repository

    Pencheva, Gergina V.; Vohralí k, Martin; Wheeler, Mary F.; Wildey, Tim

    2013-01-01

We consider discretizations of a model elliptic problem by different numerical methods applied in different subdomains, coupled using the mortar technique, in a multiscale setting where the subdomains are partitioned by a mesh of size h and the interfaces by a mesh of much coarser size H; lower-order polynomials are used in the subdomains and higher-order polynomials are used on the mortar interface mesh. We derive several fully computable a posteriori error estimates which deliver a guaranteed upper bound on the error measured in the energy norm. Our estimates are also locally efficient and one of them is robust with respect to the ratio H/h under an assumption of sufficient regularity of the weak solution.

  8. Validation of the calculation of the renal impulse response function. An analysis of errors and systematic biases

    International Nuclear Information System (INIS)

    Erbsman, F.; Ham, H.; Piepsz, A.; Struyven, J.

    1978-01-01

The renal impulse response function (renal IRF) is the time-activity curve measured over one kidney after injection of a radiopharmaceutical into the renal artery. If the tracer is injected intravenously, it is possible to compute the renal IRF by deconvolving the kidney curve with a blood curve. In previous work we demonstrated that the computed IRF is in good agreement with measurements made after injection into the renal artery. The goal of the present work is the analysis of the effect of sampling errors and the influence of extra-renal activity. The sampling error is only important for the first point of the plasma curve and yields an ill-conditioned inverse P⁻¹. The addition of 50 computed renal IRFs demonstrated that the first three points show a larger variability due to incomplete mixing of the tracer. These points should thus not be included in the smoothing process. Subtraction of non-renal activity does not appreciably modify the shape of the renal IRF. The mean transit time and the time to half value are almost independent of non-renal activity and seem to be the parameters of choice.
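Numerically, the deconvolution above amounts to inverting a lower-triangular convolution system, and the ill-conditioning is usually tamed with regularisation. A minimal sketch on synthetic curves; the ridge parameter and all signal shapes are illustrative assumptions:

```python
import numpy as np

def renal_irf(kidney, blood, dt=1.0, lam=1e-2):
    """Estimate the renal IRF h from kidney = dt * (blood * h) by
    solving the lower-triangular Toeplitz system with ridge
    regularisation (lam), which tames the ill-conditioned inverse."""
    n = len(blood)
    B = dt * np.array([[blood[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    # Regularised least squares: (B^T B + lam*I) h = B^T kidney
    return np.linalg.solve(B.T @ B + lam * np.eye(n), B.T @ kidney)

t = np.arange(60.0)
blood = np.exp(-t / 10.0)                 # synthetic input curve
true_h = (t < 20).astype(float)           # plateau-shaped true IRF
kidney = np.convolve(blood, true_h)[:t.size]
print(np.round(renal_irf(kidney, blood)[:5], 3))
```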

  9. 16-bit error detection and correction (EDAC) controller design using FPGA for critical memory applications

    International Nuclear Information System (INIS)

    Misra, M.K.; Sridhar, N.; Krishnakumar, B.; Ilango Sambasivan, S.

    2002-01-01

Full text: Complex electronic systems require the utmost reliability; especially when the storage and retrieval of critical data demand faultless operation, the system designer must strive for the highest reliability possible. Extra effort must be expended to achieve this reliability. Fortunately, not all systems must operate under such ultra-reliability requirements. The majority of systems operate in an area where system failure is not hazardous. But applications like nuclear reactors, medicine and avionics are areas where system failure may prove to have harsh consequences. High-density memories generate errors in their stored data due to external disturbances like power supply surges, system noise, natural radiation etc. These errors are called soft errors or transient errors, since they do not cause permanent damage to the memory cell. Hard errors may also occur on system memory boards. These hard errors occur if one RAM component or RAM cell fails and is stuck at either 0 or 1. Although less frequent, hard errors may cause a complete system failure. These are the major problems associated with memories.
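The standard EDAC scheme for such memories is an extended Hamming (SEC-DED) code, which corrects any single-bit error and detects double-bit errors. The sketch below implements the scheme for one data byte for brevity; a 16-bit EDAC controller applies the same construction as a (22,16) code:

```python
def hamming_encode(byte):
    """Extended Hamming (SEC-DED) encoding of one 8-bit data word."""
    code = [0] * 13                          # code[0] = overall parity
    data_pos = [p for p in range(1, 13) if p & (p - 1)]  # 3,5,6,7,...
    for p, i in zip(data_pos, range(8)):
        code[p] = (byte >> i) & 1
    for p in (1, 2, 4, 8):                   # parity over covered bits
        code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
    code[0] = sum(code) % 2                  # overall parity (for DED)
    return code

def hamming_decode(code):
    """Return (data_byte, status); corrects single, flags double."""
    syndrome = sum(p for p in (1, 2, 4, 8)
                   if sum(code[i] for i in range(1, 13) if i & p) % 2)
    overall = sum(code) % 2
    if syndrome and overall:                 # single error: fix it
        code[syndrome] ^= 1
        status = "corrected"
    elif syndrome:                           # parity even: two errors
        status = "double error detected"
    else:
        status = "parity bit error" if overall else "ok"
    data_pos = [p for p in range(1, 13) if p & (p - 1)]
    byte = sum(code[p] << i for i, p in enumerate(data_pos))
    return byte, status

word = hamming_encode(0xA5)
word[6] ^= 1                                 # inject a soft error
print(hamming_decode(word))                  # -> (165, 'corrected')
```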

  10. Alterations in Neural Control of Constant Isometric Contraction with the Size of Error Feedback.

    Directory of Open Access Journals (Sweden)

    Ing-Shiou Hwang

Full Text Available Discharge patterns from a population of motor units (MUs) were estimated with multi-channel surface electromyogram and signal processing techniques to investigate parametric differences in low-frequency force fluctuations, MU discharges, and the force-discharge relation during static force-tracking with varying sizes of execution error presented via visual feedback. Fourteen healthy adults produced isometric force at 10% of maximal voluntary contraction through index abduction under three visual conditions that scaled execution errors with different amplification factors. Error-augmentation feedback that used a high amplification factor (HAF) to potentiate visualized error size resulted in higher sample entropy, mean frequency, ratio of high-frequency components, and spectral dispersion of force fluctuations than those of error-reducing feedback using a low amplification factor (LAF). In the HAF condition, MUs with relatively high recruitment thresholds in the dorsal interosseous muscle exhibited a larger coefficient of variation for inter-spike intervals and a greater spectral peak of the pooled MU coherence at 13-35 Hz than did those in the LAF condition. Manipulation of the size of error feedback altered the force-discharge relation, which was characterized with non-linear approaches such as mutual information and cross sample entropy. The association of force fluctuations and global discharge trace decreased with increasing error amplification factor. Our findings provide direct neurophysiological evidence that favors motor training using error-augmentation feedback. Amplification of the visualized error size of visual feedback could enrich force gradation strategies during static force-tracking, pertaining to selective increases in the discharge variability of higher-threshold MUs that receive greater common oscillatory inputs in the β-band.

11. Analysis and mitigation of systematic errors in spectral shearing interferometry of pulses approaching the single-cycle limit [Invited]

    International Nuclear Information System (INIS)

    Birge, Jonathan R.; Kaertner, Franz X.

    2008-01-01

We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty.

  12. Benzodiazepine Use During Hospitalization: Automated Identification of Potential Medication Errors and Systematic Assessment of Preventable Adverse Events.

    Directory of Open Access Journals (Sweden)

    David Franklin Niedrig

Full Text Available Benzodiazepines and "Z-drug" GABA-receptor modulators (BDZ) are among the most frequently used drugs in hospitals. Adverse drug events (ADE) associated with BDZ can be the result of preventable medication errors (ME) related to dosing, drug interactions and comorbidities. The present study evaluated inpatient use of BDZ and related ME and ADE. We conducted an observational study within a pharmacoepidemiological database derived from the clinical information system of a tertiary care hospital. We developed algorithms that identified dosing errors and interacting comedication for all administered BDZ. Associated ADE and risk factors were validated in medical records. Among 53,081 patients contributing 495,813 patient-days, BDZ were administered to 25,626 patients (48.3%) on 115,150 patient-days (23.2%). We identified 3,372 patient-days (2.9%) with comedication that inhibits BDZ metabolism, and 1,197 (1.0%) with lorazepam administration in severe renal impairment. After validation we classified 134, 56, 12, and 3 cases involving lorazepam, zolpidem, midazolam and triazolam, respectively, as clinically relevant ME. Among those there were 23 cases with associated adverse drug events, including severe CNS depression, falls with subsequent injuries and severe dyspnea. Causality for BDZ was formally assessed as 'possible' or 'probable' in 20 of those cases. Four cases with ME and associated severe ADE required administration of the BDZ antagonist flumazenil. BDZ use was remarkably high in the studied setting, frequently involved potential ME related to dosing, co-medication and comorbidities, and rarely cases with associated ADE. We propose the implementation of automated ME screening and validation for the prevention of BDZ-related ADE.

  13. Sodium Bicarbonate for Control of ICP: A Systematic Review.

    Science.gov (United States)

    Zeiler, Frederick A; Sader, Nicholas; West, Michael; Gillman, Lawrence M

    2018-01-01

Our goal was to perform a systematic review of the literature on the use of intravenous sodium bicarbonate for intracranial pressure (ICP) reduction in patients with neurologic illness. Data sources: articles from MEDLINE, BIOSIS, EMBASE, Global Health, Scopus, Cochrane Library, the International Clinical Trials Registry Platform (inception to April 2015), reference lists of relevant articles, and gray literature were searched. Two reviewers independently extracted data including population characteristics and treatment characteristics. The strength of evidence was adjudicated using both the Oxford and the Grading of Recommendation Assessment, Development and Education methodology. Our search strategy produced a total of 559 citations. Three original articles were included in the review: two prospective studies (one randomized controlled trial and one single-arm) and one retrospective case report. Across all studies there were a total of 19 patients studied, with 31 episodes of elevated ICP being treated. Twenty-one of those episodes were treated with sodium bicarbonate infusion, with the remaining 10 treated with hypertonic saline in a control model. All elevated ICP episodes treated with sodium bicarbonate solution demonstrated a significant drop in ICP, without an elevation of serum partial pressure of carbon dioxide. No significant complications were described. There currently exists Oxford level 4, Grading of Recommendation Assessment, Development and Education grade D evidence to support an ICP reduction effect with intravenous sodium bicarbonate in TBI. No comments on its impact in other neuropathologic states, or on patient outcomes, can be made at this time.

  14. Brain mechanisms of self-control: A neurocognitive investigation of reward-based action control and error awareness

    NARCIS (Netherlands)

    Harsay, H.A.

    2014-01-01

    Motivation and the ability to detect errors are critical for the interaction with our environment. They provide us with the opportunity to engage in purposive, persistent and corrective behavior, and to take the consequences of our actions into account. Diminished motivation and error awareness have

  15. Study of systematic errors in the determination of total Hg levels in the range -5% in inorganic and organic matrices with two reliable spectrometrical determination procedures

    International Nuclear Information System (INIS)

    Kaiser, G.; Goetz, D.; Toelg, G.; Max-Planck-Institut fuer Metallforschung, Stuttgart; Knapp, G.; Maichin, B.; Spitzy, H.

    1978-01-01

In the determination of Hg at ng/g and pg/g levels, systematic errors are due to faults in the analytical methods such as intake, preparation and decomposition of a sample. The sources of these errors have been studied both with 203Hg radiotracer techniques and with two multi-stage procedures developed for the determination of trace levels. The emission spectrometric (OES-MIP) procedure includes incineration of the sample in a microwave-induced oxygen plasma (MIP), isolation and enrichment on a gold absorbent, and excitation in an argon plasma (MIP). The emitted Hg radiation (253.7 nm) is evaluated photometrically with a semiconductor element. The detection limit of the OES-MIP procedure was found to be 0.01 ng, the coefficient of variation 5% for 1 ng Hg. The second procedure combines a semi-automated wet digestion method (HClO3/HNO3) with reduction-aeration (ascorbic acid/SnCl2) and the flameless atomic absorption technique (253.7 nm). The detection limit of this procedure was found to be 0.5 ng, the coefficient of variation 5% for 5 ng Hg. (orig.)

  16. A Case–Control Study Investigating Simulated Driving Errors in Ischemic Stroke and Subarachnoid Hemorrhage

    Directory of Open Access Journals (Sweden)

    Megan A. Hird

    2018-02-01

Full Text Available Background: Stroke can affect a variety of cognitive, perceptual, and motor abilities that are important for safe driving. Results of studies assessing post-stroke driving ability are quite variable in the areas and degree of driving impairment among patients. This highlights the need to consider clinical characteristics, including stroke subtype, when assessing driving performance. Methods: We compared the simulated driving performance of 30 chronic stroke patients (>3 months), including 15 patients with ischemic stroke (IS) and 15 patients with subarachnoid hemorrhage (SAH), and 20 age-matched controls. A preliminary analysis was performed, subdividing IS patients into right (n = 8) and left (n = 6) hemispheric lesions and SAH patients into middle cerebral artery (MCA, n = 5) and anterior communicating artery (n = 6) territory. A secondary analysis was conducted to investigate the cognitive correlates of driving. Results: Nine patients (30%) exhibited impaired simulated driving performance, including four patients with IS (26.7%) and five patients with SAH (33.3%). Both patients with IS (2.3 vs. 0.3, U = 76, p < 0.05) and SAH (1.5 vs. 0.3, U = 45, p < 0.001) exhibited difficulty with lane maintenance (% distance out of lane) compared to controls. In addition, patients with IS exhibited difficulty with speed maintenance (% distance over speed limit; 8.9 vs. 4.1, U = 81, p < 0.05), whereas SAH patients exhibited difficulty with turning performance (total turning errors; 5.4 vs. 1.6, U = 39.5, p < 0.001). The Trail Making Test (TMT) and Useful Field of View test were significantly associated with lane maintenance among patients with IS (rs > 0.6, p < 0.05). No cognitive tests showed utility among patients with SAH. Conclusion: Both IS and SAH patients exhibited difficulty with lane maintenance. Patients with IS additionally exhibited difficulty with speed maintenance, whereas SAH patients exhibited difficulty with turning

  17. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  18. Components of effective randomized controlled trials of hydrotherapy programs for fibromyalgia syndrome: A systematic review

    OpenAIRE

    Perraton, Luke; Machotka, Zuzana; Kumar, Saravana

    2009-01-01

Luke Perraton, Zuzana Machotka, Saravana Kumar (International Centre for Allied Health Evidence, University of South Australia, Adelaide, South Australia, Australia). Aim: Previous systematic reviews have found hydrotherapy to be an effective management strategy for fibromyalgia syndrome (FMS). The aim of this systematic review was to summarize the components of hydrotherapy programs used in randomized controlled trials. Method: A systematic review of randomized controlled trials was conducted. Onl...

  19. Bayesian networks modeling for thermal error of numerical control machine tools

    Institute of Scientific and Technical Information of China (English)

    Xin-hua YAO; Jian-zhong FU; Zi-chen CHEN

    2008-01-01

The interaction between the heat source location, its intensity, the thermal expansion coefficient, the machine system configuration and the running environment creates complex thermal behavior of a machine tool, and also makes thermal error prediction difficult. To address this issue, a novel prediction method for machine tool thermal error based on Bayesian networks (BNs) was presented. The method described causal relationships of factors inducing thermal deformation by graph theory and estimated the thermal error by Bayesian statistical techniques. Due to the effective combination of domain knowledge and sampled data, the BN method could adapt to changes in the running state of the machine and obtain satisfactory prediction accuracy. Experiments on spindle thermal deformation were conducted to evaluate the modeling performance. Experimental results indicate that the BN method performs far better than least squares (LS) analysis in terms of modeling estimation accuracy.

  20. General Unified Integral Controller with Zero Steady-State Error for Single-Phase Grid-Connected Inverters

    DEFF Research Database (Denmark)

    Guo, Xiaoqiang; Guerrero, Josep M.

    2016-01-01

Current regulation is crucial for operating single-phase grid-connected inverters. The challenge for the current controller is how to track the current quickly and precisely with zero steady-state error. This paper proposes a novel feedback mechanism for the conventional PI controller. It allows... The analysis done indicates that the widely used PR (P+Resonant) control is just a special case of the proposed control solution. The time-domain simulation in Matlab/Simulink and experimental results from a TMS320F2812 DSP based laboratory prototype are in good agreement, which verifies the effectiveness...
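For reference, the resonant part of a PR controller is just an undamped oscillator at the grid frequency driven by the current error, which is what gives it infinite gain (and hence zero steady-state error) at that frequency. A discrete-time sketch with illustrative gains and a crude first-order stand-in for the plant; this is not the controller proposed in the paper:

```python
import numpy as np

class PRController:
    """P + resonant controller, continuous form Kp + Kr*s/(s^2+w0^2),
    integrated with a semi-implicit Euler step; gains are illustrative."""
    def __init__(self, kp=10.0, kr=1000.0, w0=2*np.pi*50, dt=1e-5):
        self.kp, self.kr, self.w0, self.dt = kp, kr, w0, dt
        self.x1, self.x2 = 0.0, 0.0    # resonator states

    def step(self, error):
        # x1' = x2, x2' = -w0^2*x1 + e  ->  x2 realises s/(s^2+w0^2)*e
        self.x1 += self.dt * self.x2
        self.x2 += self.dt * (-self.w0 ** 2 * self.x1 + error)
        return self.kp * error + self.kr * self.x2

ctrl = PRController()
t = np.arange(0.0, 0.1, ctrl.dt)
ref = np.sin(2 * np.pi * 50 * t)       # 50 Hz current reference
meas = 0.0
for r in ref:
    u = ctrl.step(r - meas)
    meas += ctrl.dt * (u - meas)       # crude first-order plant model
print(f"final tracking error: {abs(ref[-1] - meas):.2e}")
```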

  1. Output regulation control for switched stochastic delay systems with dissipative property under error-dependent switching

    Science.gov (United States)

    Li, L. L.; Jin, C. L.; Ge, X.

    2018-01-01

In this paper, the output regulation problem with a dissipative property for a class of switched stochastic delay systems is investigated, based on an error-dependent switching law. Under the assumption that no single subsystem is solvable for the problem, a sufficient condition is derived by constructing multiple Lyapunov-Krasovskii functionals with respect to multiple supply rates, via the design of error feedback regulators. The condition is also established when the dissipative property reduces to a passive property. Finally, two numerical examples are given to demonstrate the feasibility and efficiency of the present method.

  2. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

Full Text Available Abstract Background: Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) as a control (respectively, case). Power is verified by computer simulation. Results: Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected as a case becomes infinitely large while the cost of misclassifying an affected as a control approaches 0. Conclusion: Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
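A minimal sketch of the non-centrality-parameter power calculation under phenotype misclassification, for an allele-based 2x2 chi-square test. Here theta and phi denote the proportions of the observed case and control samples that are actually unaffected and affected, respectively; this simple mixture parameterization and all numbers are illustrative, not the authors' exact formulation:

```python
from scipy.stats import chi2, ncx2

def power_with_phenotype_error(n_case, n_control, p_case, p_control,
                               theta=0.0, phi=0.0, alpha=5e-4):
    """Asymptotic power of the allelic 2x2 chi-square test (df = 1).

    p_case/p_control: true risk-allele frequencies in affected and
    unaffected individuals; theta (phi): fraction of the observed
    case (control) sample that is actually unaffected (affected)."""
    # Observed group frequencies are mixtures of the true groups.
    p1 = (1 - theta) * p_case + theta * p_control
    p2 = phi * p_case + (1 - phi) * p_control
    pbar = (n_case * p1 + n_control * p2) / (n_case + n_control)
    # Non-centrality parameter of the two-proportion test on alleles.
    ncp = (p1 - p2) ** 2 / (pbar * (1 - pbar)
                            * (1 / (2 * n_case) + 1 / (2 * n_control)))
    crit = chi2.ppf(1 - alpha, df=1)
    return 1 - ncx2.cdf(crit, df=1, nc=ncp)

print(power_with_phenotype_error(2000, 2000, 0.30, 0.25))  # no error
print(power_with_phenotype_error(2000, 2000, 0.30, 0.25,
                                 theta=0.05, phi=0.02))    # with error
```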

  3. Mining unstructured data to support requirements elicitation by using controlled vocabularies: A systematic mapping study

    Directory of Open Access Journals (Sweden)

    José L. Barros-Justo

    2015-01-01

Full Text Available This paper presents a work-in-progress that deals with the assessment of the use of controlled vocabularies during the processes of requirements engineering, as a means to mine data from different sources (interviews, contracts, schemas and diagrams). By doing this, requirements description, analysis and comprehension are facilitated for both developers and end users. As a research methodology, we decided to use a systematic mapping study covering the last fourteen years (2000-2014). As far as we know, such studies have not yet been done; however, the cost incurred from errors in the requirements elicitation phase is one of the problems most commonly reported by practitioners. Our study includes data on the processes of building the controlled vocabulary and assesses productivity and quality. We are also interested in tools and techniques to classify and retrieve information. Our first findings suggest that this is an under-researched area.

  4. Random and systematic sampling error when hooking fish to monitor skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden in Australian farmed yellowtail kingfish (Seriola lalandi).

    Science.gov (United States)

    Fensham, J R; Bubner, E; D'Antignana, T; Landos, M; Caraguel, C G B

    2018-05-01

The Australian farmed yellowtail kingfish (Seriola lalandi, YTK) industry monitors skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden by pooling the fluke counts of 10 hooked YTK. The random and systematic error of this sampling strategy was evaluated to assess the potential impact on treatment decisions. Fluke abundance (fluke count per fish) in a study cage (estimated 30,502 fish) was assessed five times using the current sampling protocol, and its repeatability was estimated using the repeatability coefficient (CR) and the coefficient of variation (CV). Individual body weight, fork length, fluke abundance, prevalence, intensity (fluke count per infested fish) and density (fluke count per kg of fish) were compared between 100 hooked and 100 seined YTK (assumed representative of the entire population) to estimate potential selection bias. Depending on the fluke species and age category, CR (the expected difference in parasite count between 2 sampling iterations) ranged from 0.78 to 114 flukes per fish. Capturing YTK by hooking increased the selection of fish of a weight and length in the lowest 5th percentile of the cage (RR = 5.75, 95% CI: 2.06-16.03, P-value = 0.0001). These lower-end YTK had on average an extra 31 juvenile and 6 adult Z. seriolae per kg of fish and an extra 3 juvenile and 0.4 adult B. seriolae per kg of fish, compared to the rest of the cage population. Hooking thus biased sampling towards the smallest and most heavily infested fish in the population, resulting in poor repeatability (more variability amongst sampled fish) and an overestimation of parasite burden in the population. In this particular commercial situation these findings supported the health management program, where an underestimation of parasite burden could have a production impact on the study population. In instances where fish populations and parasite burdens are more homogenous, sampling error may be less severe. Sampling error when capturing fish

  5. Reducing Check-in Errors at Brigham Young University through Statistical Process Control

    Science.gov (United States)

    Spackman, N. Andrew

    2005-01-01

    The relationship between the library and its patrons is damaged and the library's reputation suffers when returned items are not checked in. An informal survey reveals librarians' concern for this problem and their efforts to combat it, although few libraries collect objective measurements of errors or the effects of improvement efforts. Brigham…

  6. Systematic Desensitization as Training in Self-Control

    Science.gov (United States)

    Goldfried, Marvin R.

    1971-01-01

    A description of a mediational model to explain the effectiveness of desensitization and a discussion of the available corroborative research findings for this alternative explanation are given. Also, specific procedural modifications for systematic desensitization are suggested. (Author)

  7. Feedback error learning controller for functional electrical stimulation assistance in a hybrid robotic system for reaching rehabilitation

    Directory of Open Access Journals (Sweden)

    Francisco Resquín

    2016-07-01

Full Text Available Hybrid robotic systems represent a novel research field, where functional electrical stimulation (FES) is combined with a robotic device for rehabilitation of motor impairment. Under this approach, the design of robust FES controllers still remains an open challenge. In this work, we aimed at developing a learning FES controller to assist in the performance of reaching movements in a simple hybrid robotic system setting. We implemented a Feedback Error Learning (FEL) control strategy consisting of a feedback PID controller and a feedforward controller based on a neural network. A passive exoskeleton complemented the FES controller by compensating the effects of gravity. We carried out experiments with healthy subjects to validate the performance of the system. Results show that the FEL control strategy is able to adjust the FES intensity to track the desired trajectory accurately without the need of a previous mathematical model.
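The essence of feedback error learning is that the feedback controller's output doubles as the training signal for the feedforward model, so the feedforward term gradually takes over and the feedback term shrinks. A minimal sketch with a linear model standing in for the neural network; gains, learning rate, and features are illustrative assumptions:

```python
import numpy as np

class FeedbackErrorLearning:
    """PID feedback + adaptive linear feedforward trained with the
    feedback command as its error signal (the core FEL idea)."""
    def __init__(self, n_features, kp=5.0, ki=0.5, kd=0.1,
                 lr=1e-3, dt=0.01):
        self.w = np.zeros(n_features)       # feedforward weights
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lr, self.dt = lr, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, error, features):
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        u_fb = (self.kp * error + self.ki * self.integral
                + self.kd * deriv)
        features = np.asarray(features, float)
        # Train the feedforward model on the feedback output: once the
        # model predicts the needed command, u_fb decays toward zero.
        self.w += self.lr * u_fb * features
        return u_fb + self.w @ features
```

In the hybrid setting described above, the feature vector would encode the desired reaching trajectory (e.g., position and velocity) and the returned command would modulate FES intensity.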

  8. A statistical approach to estimating effects of performance shaping factors on human error probabilities of soft controls

    International Nuclear Information System (INIS)

    Kim, Yochan; Park, Jinkyun; Jung, Wondea; Jang, Inseok; Hyun Seong, Poong

    2015-01-01

Despite recent efforts toward data collection for supporting human reliability analysis, there remains a lack of empirical basis for determining the effects of performance shaping factors (PSFs) on human error probabilities (HEPs). To enhance the empirical basis regarding the effects of the PSFs, a statistical methodology using logistic regression and stepwise variable selection was proposed, and the effects of the PSFs on HEPs related to soft controls were estimated through the methodology. For this estimation, more than 600 human error opportunities related to soft controls in a computerized control room were obtained through laboratory experiments. From the eight PSF surrogates and combinations of these variables, the procedure quality, practice level, and the operation type were identified as significant factors for screen switch and mode conversion errors. The contributions of these significant factors to HEPs were also estimated in terms of a multiplicative form. The usefulness and limitations of the experimental data and the techniques employed are discussed herein, and we believe that the logistic regression and stepwise variable selection methods will provide a way to estimate the effects of PSFs on HEPs in an objective manner. - Highlights: • It is necessary to develop an empirical basis for the effects of the PSFs on the HEPs. • A statistical method using logistic regression and variable selection was proposed. • The effects of PSFs on the HEPs of soft controls were empirically investigated. • The significant factors were identified and their effects were estimated
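The multiplicative form mentioned above falls out of logistic regression directly: exponentiated coefficients are multiplicative effects on the error odds. A sketch on synthetic data; the PSF codings, effect sizes, and sample are invented, and stepwise selection is omitted for brevity:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in: one row per human error opportunity, a binary
# error outcome, and two binary PSF surrogates.
rng = np.random.default_rng(1)
n = 600
procedure_quality = rng.integers(0, 2, n)   # 0 = good, 1 = poor
practice_level = rng.integers(0, 2, n)      # 0 = practiced, 1 = novice
logit = -3.0 + 1.2 * procedure_quality + 0.8 * practice_level
error = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([procedure_quality, practice_level]))
fit = sm.Logit(error, X).fit(disp=False)
print(fit.params)            # fitted log-odds coefficients
print(np.exp(fit.params))    # multiplicative effects on error odds
```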

  9. Controlling the error on target motion through real-time mesh adaptation: Applications to deep brain stimulation.

    Science.gov (United States)

    Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A

    2018-05-01

An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied to simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables us to control, and thus to reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniform finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications for increasing the accuracy, and controlling the computational expense, of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries because the simulation taking place in the control loop of a robot needs to be accurate and to occur in real time. Copyright © 2018 John Wiley & Sons, Ltd.

  10. Measurement error potential and control when quantifying volatile hydrocarbon concentrations in soils

    International Nuclear Information System (INIS)

    Siegrist, R.L.

    1991-01-01

Due to their widespread use throughout commerce and industry, volatile hydrocarbons such as toluene, trichloroethene, and 1,1,1-trichloroethane routinely appear as principal pollutants in contaminated soil systems. Quantification of soil hydrocarbons is necessary to confirm the presence of contamination and its nature and extent; to assess site risks and the need for cleanup; to evaluate remedial technologies; and to verify the performance of a selected alternative. Decisions regarding these issues have far-reaching impacts and, ideally, should be based on accurate measurements of soil hydrocarbon concentrations. Unfortunately, quantification of volatile hydrocarbons in soils is extremely difficult and there is normally little understanding of the accuracy and precision of these measurements. Rather, the assumption is often implicitly made that the hydrocarbon data are sufficiently accurate for the intended purpose. This paper presents a discussion of measurement error potential when quantifying volatile hydrocarbons in soils, and outlines some methods for understanding and managing these errors

  11. ERROR-CONTROL CODING OF ADS-B MESSAGES FOR IRIDIUM SATELLITES

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2013-12-01

    Full Text Available For modelling of ADS-B message transmission via the low-orbit satellite constellation Iridium, a model of the communication channel "Aircraft - Satellite - Ground Station" was built using MATLAB Simulink. This model made it possible to investigate the dependence of the Bit Error Rate on the type of signal coding/decoding, the ratio Eb/N0, and the satellite repeater gain.
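
    The kind of dependence such a model produces can be illustrated with a minimal Monte Carlo link simulation. The sketch below estimates the bit error rate of uncoded BPSK over an AWGN channel versus Eb/N0; the Iridium constellation geometry, the coding/decoding schemes, and the satellite repeater gain studied in the paper are not modelled.

    ```python
    # Minimal Monte Carlo BER sketch: uncoded BPSK over AWGN versus Eb/N0.
    import numpy as np

    rng = np.random.default_rng(1)
    n_bits = 200_000
    bits = rng.integers(0, 2, n_bits)
    symbols = 1 - 2 * bits                    # BPSK mapping: 0 -> +1, 1 -> -1

    for ebn0_db in range(0, 10, 2):
        ebn0 = 10.0 ** (ebn0_db / 10.0)
        sigma = np.sqrt(1.0 / (2.0 * ebn0))   # noise std for unit-energy symbols
        rx = symbols + sigma * rng.standard_normal(n_bits)
        ber = np.mean((rx < 0).astype(int) != bits)
        print(f"Eb/N0 = {ebn0_db} dB -> BER = {ber:.2e}")
    ```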

  12. Robust a Posteriori Error Control and Adaptivity for Multiscale, Multinumerics, and Mortar Coupling

    KAUST Repository

    Pencheva, Gergina V.

    2013-01-01

    We consider discretizations of a model elliptic problem by means of different numerical methods applied separately in different subdomains, termed multinumerics, coupled using the mortar technique. The grids need not match along the interfaces. We are also interested in the multiscale setting, where the subdomains are partitioned by a mesh of size h, whereas the interfaces are partitioned by a mesh of much coarser size H, and where lower-order polynomials are used in the subdomains and higher-order polynomials are used on the mortar interface mesh. We derive several fully computable a posteriori error estimates which deliver a guaranteed upper bound on the error measured in the energy norm. Our estimates are also locally efficient, and one of them is robust with respect to the ratio H/h under an assumption of sufficient regularity of the weak solution. The present approach allows the subdomain and interface errors to be bounded separately and compared with one another. A subdomain/interface adaptive refinement strategy is proposed and numerically tested. © 2013 Society for Industrial and Applied Mathematics.

  13. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

    Science.gov (United States)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placements of equivalent quality. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.
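
    The cell-colouring idea can be pictured as a greedy colouring of an interaction graph: cells that share a net receive different colours, so all cells of one colour can be moved in parallel without affecting each other's cost terms. The sketch below is an assumed reading of the heuristic with a hypothetical shared-net relation; the paper's exact ordering and data structures are not reproduced.

    ```python
    # Assumed reading of Heuristic Cell-Coloring: greedily colour an interaction
    # graph so that same-coloured cells share no net and can move in parallel.
    def color_cells(cells, interacts):
        """interacts(a, b) -> True if cells a and b share a net."""
        colors = {}
        for c in cells:                       # fixed visiting order; heuristics vary
            used = {colors[d] for d in colors if interacts(c, d)}
            colors[c] = next(k for k in range(len(cells)) if k not in used)
        return colors

    # Tiny example with a hypothetical shared-net relation.
    nets = {0: {"a"}, 1: {"a", "b"}, 2: {"b"}, 3: set()}
    interacts = lambda u, v: bool(nets[u] & nets[v])
    print(color_cells(list(nets), interacts))  # {0: 0, 1: 1, 2: 0, 3: 0}
    ```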

  14. Analysis and reduction of 3D systematic and random setup errors during the simulation and treatment of lung cancer patients with CT-based external beam radiotherapy dose planning.

    NARCIS (Netherlands)

    Boer, H.D. de; Sornsen de Koste, J.R. van; Senan, S.; Visser, A.G.; Heijmen, B.J.M.

    2001-01-01

    PURPOSE: To determine the magnitude of the errors made in (a) the setup of patients with lung cancer on the simulator relative to their intended setup with respect to the planned treatment beams and (b) in the setup of these patients on the treatment unit. To investigate how the systematic component

  15. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  16. Resistive wall mode feedback control in EXTRAP T2R with improved steady-state error and transient response

    Science.gov (United States)

    Brunsell, P. R.; Olofsson, K. E. J.; Frassinetti, L.; Drake, J. R.

    2007-10-01

    Experiments in the EXTRAP T2R reversed field pinch [P. R. Brunsell, H. Bergsåker, M. Cecconello et al., Plasma Phys. Control. Fusion 43, 1457 (2001)] on feedback control of m =1 resistive wall modes (RWMs) are compared with simulations using the cylindrical linear magnetohydrodynamic model, including the dynamics of the active coils and power amplifiers. Stabilization of the main RWMs (n=-11,-10,-9,-8,+5,+6) is shown using modest loop gains of the order G ~ 1. However, other marginally unstable RWMs (n=-2,-1,+1,+2) driven by external field errors are only partially canceled at these gains. The experimental system stability limit is confirmed by simulations showing that the latency of the digital controller (~50 μs) degrades the system gain margin. The transient response is improved with a proportional-plus-derivative controller, and the steady-state error is improved with a proportional-plus-integral controller. Suppression of all modes is obtained at high gain G ~ 10 using a proportional-plus-integral-plus-derivative controller.
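
    The controller comparison reported here follows textbook behaviour: integral action drives the steady-state error to zero, while derivative action damps the transient. A minimal discrete-time sketch on a toy first-order plant (the gains and plant are illustrative, not the EXTRAP T2R model):

    ```python
    # Integral action removes the steady-state error; derivative action damps
    # the transient. Toy first-order plant, illustrative gains.
    def remaining_error(kp, ki, kd, steps=400, dt=0.01):
        y, integ, prev_err = 0.0, 0.0, 1.0
        for _ in range(steps):
            err = 1.0 - y                     # unit setpoint
            integ += err * dt
            deriv = (err - prev_err) / dt
            prev_err = err
            u = kp * err + ki * integ + kd * deriv
            y += dt * (-y + u)                # plant: y' = -y + u
        return 1.0 - y                        # steady-state error after 4 s

    print("P  :", remaining_error(5.0, 0.0, 0.0))   # finite offset remains
    print("PI :", remaining_error(5.0, 10.0, 0.0))  # offset driven to ~0
    print("PID:", remaining_error(5.0, 10.0, 0.2))  # ~0 offset, damped transient
    ```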

  17. A systematic framework for design of process monitoring and control (PAT) systems for crystallization processes

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli Bin; Sin, Gürkan; Gernaey, Krist

    2013-01-01

    A generic computer-aided framework for systematic design of a process monitoring and control system for crystallization processes has been developed to study various aspects of crystallization operations. The systematic design framework contains a generic crystallizer modelling toolbox, a tool for...

  18. A Low-Cost Environmental Monitoring System: How to Prevent Systematic Errors in the Design Phase through the Combined Use of Additive Manufacturing and Thermographic Techniques

    Directory of Open Access Journals (Sweden)

    Francesco Salamone

    2017-04-01

    Full Text Available nEMoS (nano Environmental Monitoring System) is a 3D-printed device built following the Do-It-Yourself (DIY) approach. It can be connected to the web and it can be used to assess indoor environmental quality (IEQ). It is built using some low-cost sensors connected to an Arduino microcontroller board. The device is assembled in a small-sized case and both thermohygrometric sensors used to measure the air temperature and relative humidity, and the globe thermometer used to measure the radiant temperature, can be subject to thermal effects due to overheating of some nearby components. A thermographic analysis was made to rule out this possibility. The paper shows how the pervasive technique of additive manufacturing can be combined with the more traditional thermographic techniques to redesign the case and to verify the accuracy of the optimized system in order to prevent instrumental systematic errors in terms of the difference between experimental and actual values of the above-mentioned environmental parameters.

  19. Evaluation of Stability of Complexes of Inner Transition Metal Ions with 2-Oxo-1-pyrrolidine Acetamide and Role of Systematic Errors

    Directory of Open Access Journals (Sweden)

    Sangita Sharma

    2011-01-01

    Full Text Available BEST FIT models were used to study the complexation of inner transition metal ions like Y(III), La(III), Ce(III), Pr(III), Nd(III), Sm(III), Gd(III), Dy(III) and Th(IV) with 2-oxo-1-pyrrolidine acetamide at 30 °C in 10%, 20%, 30%, 40%, 50% and 60% v/v dioxane-water mixtures at 0.2 M ionic strength. The Irving-Rossotti titration method was used to obtain titration data. Calculations were carried out with the PKAS and BEST Fortran IV computer programs. The expected species, such as L, LH+, ML, ML2 and ML(OH)3, were obtained with SPEPLOT. The stability of the complexes increased with increasing dioxane content. The observed change in stability can be explained on the basis of electrostatic effects, non-electrostatic effects, the solvating power of the solvent mixture, interactions between ions, and interactions of ions with solvents. The effects of systematic errors, such as dissolved carbon dioxide and the concentrations of alkali, acid, ligand and metal, are also explained here.

  20. A Low-Cost Environmental Monitoring System: How to Prevent Systematic Errors in the Design Phase through the Combined Use of Additive Manufacturing and Thermographic Techniques.

    Science.gov (United States)

    Salamone, Francesco; Danza, Ludovico; Meroni, Italo; Pollastro, Maria Cristina

    2017-04-11

    nEMoS (nano Environmental Monitoring System) is a 3D-printed device built following the Do-It-Yourself (DIY) approach. It can be connected to the web and it can be used to assess indoor environmental quality (IEQ). It is built using some low-cost sensors connected to an Arduino microcontroller board. The device is assembled in a small-sized case and both thermohygrometric sensors used to measure the air temperature and relative humidity, and the globe thermometer used to measure the radiant temperature, can be subject to thermal effects due to overheating of some nearby components. A thermographic analysis was made to rule out this possibility. The paper shows how the pervasive technique of additive manufacturing can be combined with the more traditional thermographic techniques to redesign the case and to verify the accuracy of the optimized system in order to prevent instrumental systematic errors in terms of the difference between experimental and actual values of the above-mentioned environmental parameters.

  1. Projective Synchronization of N-Dimensional Chaotic Fractional-Order Systems via Linear State Error Feedback Control

    Directory of Open Access Journals (Sweden)

    Baogui Xin

    2012-01-01

    Full Text Available Based on linear feedback control technique, a projective synchronization scheme of N-dimensional chaotic fractional-order systems is proposed, which consists of master and slave fractional-order financial systems coupled by linear state error variables. It is shown that the slave system can be projectively synchronized with the master system constructed by state transformation. Based on the stability theory of linear fractional order systems, a suitable controller for achieving synchronization is designed. The given scheme is applied to achieve projective synchronization of chaotic fractional-order financial systems. Numerical simulations are given to verify the effectiveness of the proposed projective synchronization scheme.

  2. Systematic design of an optimal control system for the SHARON-Anammox process

    DEFF Research Database (Denmark)

    Valverde Perez, Borja; Mauricio Iglesias, Miguel; Sin, Gürkan

    2016-01-01

    A systematic design of an optimal control structure for the SHARON-Anammox nitrogen removal process is studied. The methodology incorporates two novel features to assess the controllability of the design variables candidate for the regulatory control layer: (i) H- control method, which formulates...

  3. A Self-Control Versus a Counterconditioning Paradigm for Systematic Desensitization: An Experimental Comparison

    Science.gov (United States)

    Spiegler, Michael D.; And Others

    1976-01-01

    A comparison was made between the traditional counterconditioning paradigm and a self-control paradigm of systematic desensitization. College students reporting high test anxiety and indicating interest in receiving treatment were assigned to counterconditioning, self-control, or wait-list control conditions. As predicted, self-control procedures…

  4. Human error and the associated recovery probabilities for soft control being used in the advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

    Highlights: • The operating environment of MCRs in NPPs has changed with the adoption of digital HSIs. • Most current HRA databases are not explicitly designed to deal with digital HSIs. • An empirical analysis for a new HRA DB under an advanced MCR mockup is carried out. • It is expected that the results can be used for advanced MCR HRA. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these studies were focused on the conventional Main Control Room (MCR) environment. However, the operating environment of MCRs in NPPs has changed with the adoption of new human-system interfaces (HSIs) largely based on up-to-date digital technologies. The MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are particularly important because operating actions in advanced MCRs are performed through them. Due to the difference in interfaces between soft controls and hardwired conventional controls, different HEPs should be used in the HRA for advanced MCRs. Unfortunately, most current HRA databases deal with operations in conventional MCRs and are not explicitly designed to deal with digital HSIs. For this reason, empirical human error probabilities and the associated error recovery probabilities were collected from the mockup of an advanced MCR equipped with soft controls. To this end, small-scale experiments were conducted in which 48 graduate students from the department of nuclear engineering at the Korea Advanced Institute of Science and Technology (KAIST) participated, and accident scenarios were designed with respect to the typical Design Basis Accidents (DBAs) in NPPs, such as Steam Generator Tube Rupture

  5. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  6. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer is used as the Observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced based on the error function measurements. The error map leads to an error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
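
    The correction step itself can be sketched compactly: given an error map measured on a grid, the commanded coordinate is pre-compensated by the interpolated systematic error at the target point. The code below is a hedged illustration; the map, the interpolation scheme, and the single-axis correction are assumptions, not the article's postprocessor.

    ```python
    # Sketch of applying a measured volumetric error map: pre-compensate the
    # commanded coordinate by the interpolated systematic error at the target.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical X-axis error map measured on a coarse 3D grid (units: mm).
    x = y = z = np.linspace(0.0, 500.0, 6)
    gx, gy, gz = np.meshgrid(x, y, z, indexing="ij")
    err_x = 1e-5 * gx + 5e-6 * gy             # toy systematic error field

    interp = RegularGridInterpolator((x, y, z), err_x)

    def corrected_x(target):
        """Shift the commanded X so the realised X lands on the target."""
        return target[0] - interp(target).item()

    print(corrected_x(np.array([250.0, 100.0, 50.0])))
    ```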

  7. An Enhanced Intelligent Handheld Instrument with Visual Servo Control for 2-DOF Hand Motion Error Compensation

    Directory of Open Access Journals (Sweden)

    Yan Naing Aye

    2013-10-01

    Full Text Available The intelligent handheld instrument, ITrem2, enhances manual positioning accuracy by cancelling erroneous hand movements and, at the same time, provides automatic micromanipulation functions. Visual data are acquired from a high-speed monovision camera attached to the optical surgical microscope, and acceleration measurements are acquired from the inertial measurement unit (IMU) on board ITrem2. Tremor estimation and cancelling are implemented via a Band-limited Multiple Fourier Linear Combiner (BMFLC) filter. The piezoelectrically actuated micromanipulator in ITrem2 generates the 3D motion to compensate for erroneous hand motion. Preliminary bench-top 2-DOF experiments have been conducted. The error motion simulated by a motion stage is reduced by 67% for multiple-frequency oscillatory motions and 56.16% for pre-conditioned recorded physiological tremor.
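
    A BMFLC filter estimates the tremor as an adaptively weighted sum of sine and cosine references spanning the physiological tremor band. The sketch below shows the core LMS weight update on a synthetic tremor signal; the band limits, adaptation gain, and signals are illustrative assumptions rather than the ITrem2 parameters.

    ```python
    # Sketch of a band-limited multiple Fourier linear combiner (BMFLC) tremor
    # estimator with an LMS update; band, gain, and signals are illustrative.
    import numpy as np

    fs = 250                                   # Hz
    t = np.arange(0.0, 4.0, 1.0 / fs)
    tremor = 0.3 * np.sin(2 * np.pi * 9.0 * t + 0.5)   # synthetic 9 Hz tremor
    meas = tremor + 0.02 * np.random.default_rng(2).standard_normal(t.size)

    freqs = np.arange(7.0, 13.0, 0.5)          # candidate tremor band
    w = np.zeros(2 * freqs.size)               # sin/cos weights
    mu = 0.02                                  # LMS adaptation gain
    est = np.zeros_like(t)
    for k, tk in enumerate(t):
        ref = np.concatenate([np.sin(2 * np.pi * freqs * tk),
                              np.cos(2 * np.pi * freqs * tk)])
        est[k] = w @ ref
        w += 2 * mu * (meas[k] - est[k]) * ref  # adapt towards the tremor
    print("residual RMS:", np.sqrt(np.mean((tremor[fs:] - est[fs:]) ** 2)))
    ```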

  8. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [Ohio State Univ.]; Covey, Curt [LLNL]; Ghattas, Omar [Univ. of Texas]; Graziani, Carlo [Univ. of Chicago]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC Berkeley]; Stewart, James [SNL]

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  9. Active control of aircraft engine inlet noise using compact sound sources and distributed error sensors

    Science.gov (United States)

    Burdisso, Ricardo (Inventor); Fuller, Chris R. (Inventor); O'Brien, Walter F. (Inventor); Thomas, Russell H. (Inventor); Dungan, Mary E. (Inventor)

    1996-01-01

    An active noise control system using a compact sound source is effective in reducing aircraft engine duct noise. The fan noise from a turbofan engine is controlled using an adaptive filtered-x LMS algorithm. Single-channel control systems are used to control the fan blade passage frequency (BPF) tone, and the BPF tone together with its first harmonic, for a plane-wave excitation. A multi-channel control system is used to control any spinning mode, and to control both fan tones and a high-pressure compressor BPF tone simultaneously. In order to make active control of turbofan inlet noise a viable technology, a compact sound source is employed to generate the control field. This control-field sound source consists of an array of identical thin, cylindrically curved panels with an inner radius of curvature corresponding to that of the engine inlet. These panels are flush mounted inside the inlet duct and sealed on all edges to prevent leakage around the panel and to minimize the aerodynamic losses created by the addition of the panels. Each panel is driven by one or more piezoelectric force transducers mounted on the surface of the panel. The response of the panel to excitation is maximized when it is driven at its resonance; therefore, the panel is designed such that its fundamental frequency is near the tone to be canceled, typically 2000-4000 Hz.
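
    The adaptive filtered-x LMS algorithm named in the patent adapts a control filter using a reference signal filtered through (a model of) the secondary path from actuator to error sensor. The sketch below is a minimal single-tone version with an assumed toy secondary path; the duct acoustics, spinning modes, and piezoelectric panel sources are not modelled.

    ```python
    # Minimal filtered-x LMS sketch for a single fan tone; the secondary path
    # here is an assumed toy impulse response.
    import numpy as np

    fs, n = 8000, 8000
    t = np.arange(n) / fs
    d = np.sin(2 * np.pi * 400 * t)           # BPF tone at the error sensor
    x = np.sin(2 * np.pi * 400 * t)           # reference correlated with the tone
    s = np.array([0.5, 0.3])                  # assumed secondary-path response

    w = np.zeros(8)                           # adaptive control filter
    xf = np.convolve(x, s)[:n]                # reference filtered by the path
    xbuf, fbuf, ybuf = np.zeros(8), np.zeros(8), np.zeros(2)
    mu, err = 0.01, np.zeros(n)
    for k in range(n):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[k]
        ybuf = np.roll(ybuf, 1); ybuf[0] = w @ xbuf     # anti-noise drive
        err[k] = d[k] + s @ ybuf                        # residual at the sensor
        fbuf = np.roll(fbuf, 1); fbuf[0] = xf[k]
        w -= mu * err[k] * fbuf                         # FxLMS weight update
    print("power drop:", np.mean(err[:800] ** 2) / np.mean(err[-800:] ** 2))
    ```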

  10. A Real-Time Accurate Model and Its Predictive Fuzzy PID Controller for Pumped Storage Unit via Error Compensation

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2017-12-01

    Full Text Available Model simulation and control of the pumped storage unit (PSU) are essential to improve the dynamic quality of a power station. Only if the PSU model reflects the actual transient process can a novel control method be properly applied in engineering. The contributions of this paper are that (1) a real-time accurate equivalent circuit model (RAECM) of a PSU via error compensation is proposed to reconcile the conflict between real-time online simulation and accuracy under various operating conditions, and (2) an adaptive predictive fuzzy PID controller (APFPID) based on RAECM is put forward to overcome the instability of conventional control under no-load conditions with a low water head. All hydraulic factors in the pipeline system are fully considered based on the equivalent lumped-circuit theorem. The pretreatment, which consists of an improved Suter transformation and a BP neural network, and an online simulation method featuring two iterative loops are proposed together to improve the solution accuracy for the pump-turbine. Moreover, modified formulas for compensating error are derived with variable-spatial discretization to further improve the accuracy of the real-time simulation. The implicit RadauIIA method is verified to be more suitable for PSUGS owing to its wider stability domain. The APFPID controller is then constructed based on the integration of fuzzy PID and model predictive control. Rolling prediction by RAECM is proposed to replace rolling optimization, with its computational speed guaranteed. Finally, simulation and on-site measurements are compared to demonstrate the trustworthiness of RAECM under various running conditions. Comparative experiments also indicate that the APFPID controller outperforms other controllers in most cases, especially under low-water-head conditions. Satisfactory results of RAECM have been achieved in engineering, and it provides a novel model reference for PSUGS.

  11. Force to Rebalance Control of HRG and Suppression of Its Errors on the Basis of FPGA

    Directory of Open Access Journals (Sweden)

    Qingan Jiang

    2011-12-01

    Full Text Available A novel design of force to rebalance control for a hemispherical resonator gyro (HRG) based on FPGA is demonstrated in this paper. The proposed design takes advantage of the automatic gain control loop and phase lock loop configuration in the drive mode while making full use of the quadrature control loop and rebalance control loop in controlling the oscillating dynamics in the sense mode. First, the math model of HRG with inhomogeneous damping and frequency split is theoretically analyzed. In addition, the major drift mechanisms in the HRG are described and the methods that can suppress the gyro drift are mentioned. Based on the math model and drift mechanisms suppression method, four control loops are employed to realize the manipulation of the HRG by using a FPGA circuit. The reference-phase loop and amplitude control loop are used to maintain the vibration of primary mode at its natural frequency with constant amplitude. The frequency split is readily eliminated by the quadrature loop with a DC voltage feedback from the quadrature component of the node. The secondary mode response to the angle rate input is nullified by the rebalance control loop. In order to validate the effect of the digital control of HRG, experiments are carried out with a turntable. The experimental results show that the design is suitable for the control of HRG which has good linearity scale factor and bias stability.

  12. Force to rebalance control of HRG and suppression of its errors on the basis of FPGA.

    Science.gov (United States)

    Wang, Xu; Wu, Wenqi; Luo, Bing; Fang, Zhen; Li, Yun; Jiang, Qingan

    2011-01-01

    A novel design of force to rebalance control for a hemispherical resonator gyro (HRG) based on FPGA is demonstrated in this paper. The proposed design takes advantage of the automatic gain control loop and phase lock loop configuration in the drive mode while making full use of the quadrature control loop and rebalance control loop in controlling the oscillating dynamics in the sense mode. First, the math model of HRG with inhomogeneous damping and frequency split is theoretically analyzed. In addition, the major drift mechanisms in the HRG are described and the methods that can suppress the gyro drift are mentioned. Based on the math model and drift mechanisms suppression method, four control loops are employed to realize the manipulation of the HRG by using a FPGA circuit. The reference-phase loop and amplitude control loop are used to maintain the vibration of primary mode at its natural frequency with constant amplitude. The frequency split is readily eliminated by the quadrature loop with a DC voltage feedback from the quadrature component of the node. The secondary mode response to the angle rate input is nullified by the rebalance control loop. In order to validate the effect of the digital control of HRG, experiments are carried out with a turntable. The experimental results show that the design is suitable for the control of HRG which has good linearity scale factor and bias stability.

  13. A Benchmark Study on Error Assessment and Quality Control of CCS Reads Derived from the PacBio RS.

    Science.gov (United States)

    Jiao, Xiaoli; Zheng, Xin; Ma, Liang; Kutty, Geetha; Gogineni, Emile; Sun, Qiang; Sherman, Brad T; Hu, Xiaojun; Jones, Kristine; Raley, Castle; Tran, Bao; Munroe, David J; Stephens, Robert; Liang, Dun; Imamichi, Tomozumi; Kovacs, Joseph A; Lempicki, Richard A; Huang, Da Wei

    2013-07-31

    PacBio RS, a newly emerging third-generation DNA sequencing platform, is based on a real-time, single-molecule sequencing technology that can generate very long reads (up to 20 kb), in contrast to the shorter reads produced by the first- and second-generation sequencing technologies. As a new platform, it is important to assess the sequencing error rate, as well as the quality control (QC) parameters associated with the PacBio sequence data. In this study, a mixture of 10 previously known, closely related DNA amplicons were sequenced using the PacBio RS sequencing platform. After aligning Circular Consensus Sequence (CCS) reads derived from the above sequencing experiment to the known reference sequences, we found that the median error rate was 2.5% without read QC, and improved to 1.3% with an SVM-based multi-parameter QC method. In addition, a de novo assembly was used as a downstream application to evaluate the effects of different QC approaches. This benchmark study indicates that even though CCS reads are already error-corrected, it is still necessary to perform appropriate QC on them in order to produce successful downstream bioinformatics analytical results.

  14. Edge placement error control and Mask3D effects in High-NA anamorphic EUV lithography

    Science.gov (United States)

    van Setten, Eelco; Bottiglieri, Gerardo; de Winter, Laurens; McNamara, John; Rusu, Paul; Lubkoll, Jan; Rispens, Gijsbert; van Schoot, Jan; Neumann, Jens Timo; Roesch, Matthias; Kneer, Bernhard

    2017-10-01

    To enable cost-effective shrink at the 3nm node and beyond, and to extend Moore's law into the next decade, ASML is developing a new high-NA EUV platform. The high-NA system is targeted to feature a numerical aperture (NA) of 0.55 to extend the single-exposure resolution limit to 8nm half pitch. The system is being designed to achieve an on-product-overlay (OPO) performance well below 2nm, a high image contrast to drive down local CD errors, and a global CDU at sub-1nm level, to be able to meet customer edge placement error (EPE) requirements for the devices of the future. EUV scanners employ reflective Bragg multi-layer mirrors in the mask and in the Projection Optics Box (POB) that is used to project the mask pattern into the photoresist on the silicon wafer. These MoSi multi-layer mirrors are tuned for maximum reflectivity, and thus productivity, at 13.5nm wavelength. The angular range of incident light for which a high reflectivity at the reticle can be obtained is limited to ±11°, exceeding the maximum angle occurring in current 0.33NA scanners at 4x demagnification. At 0.55NA the maximum angle at reticle level would extend up to 17° in the critical (scanning) direction and severely compromise the imaging performance of horizontal features. To circumvent this issue a novel anamorphic optics design has been introduced, which has a 4x demagnification in the X- (slit) direction and 8x demagnification in the Y- (scanning) direction, as well as a central obscuration in the exit pupil. In this work we will show that the EUV high-NA anamorphic concept can successfully solve the angular reflectivity issues and provide good imaging performance in both directions. Several unique imaging challenges in comparison to the 0.33NA isomorphic baseline are being studied, such as the impact of the central obscuration in the POB and Mask-3D effects at increased NA that seem most pronounced for vertical features. These include M3D induced contrast loss and non

  15. DISTURBANCE ERROR INVARIANCE IN AUTOMATIC CONTROL SYSTEMS FOR TECHNOLOGICAL OBJECT TRAJECTORY MOVEMENT

    Directory of Open Access Journals (Sweden)

    A. V. Lekareva

    2016-09-01

    Full Text Available We consider combined control in automatic control systems for the trajectory movement of technological objects. We present research results on ensuring disturbance invariance of the system, using the example of a technological manipulator that implements hydrocutting of oil pipelines. Control is based on the propositions of the fourth modified invariance form with the use of bootstrapping methods. The paper presents an analysis of results obtained by two different correction methods. The essence of the first method lies in injecting an additional component into the already established control signal and forming a channel for that component. Control signal correction during the signal synthesis stage in the control device constitutes the basis for the second method. The research results have shown that both correction methods are highly effective, and both have roughly the same precision. We have shown that the correction in the control device is preferable because it has no influence on the inner contour of the system. We have also shown the need for a block with a variable transmission coefficient, whose value is determined by the parameters of the technological trajectory. The results can be applied in practice to improve the precision specifications of automatic control systems for trajectory manipulators.

  16. An audit strategy for time-to-event outcomes measured with error: application to five randomized controlled trials in oncology.

    Science.gov (United States)

    Dodd, Lori E; Korn, Edward L; Freidlin, Boris; Gu, Wenjuan; Abrams, Jeffrey S; Bushnell, William D; Canetta, Renzo; Doroshow, James H; Gray, Robert J; Sridhara, Rajeshwari

    2013-10-01

    Measurement error in time-to-event end points complicates interpretation of treatment effects in clinical trials. Non-differential measurement error is unlikely to produce large bias [1]. When error depends on treatment arm, bias is of greater concern. Blinded-independent central review (BICR) of all images from a trial is commonly undertaken to mitigate differential measurement-error bias that may be present in hazard ratios (HRs) based on local evaluations. Similar BICR and local evaluation HRs may provide reassurance about the treatment effect, but BICR adds considerable time and expense to trials. We describe a BICR audit strategy [2] and apply it to five randomized controlled trials to evaluate its use and to provide practical guidelines. The strategy requires BICR on a subset of study subjects, rather than a complete-case BICR, and makes use of an auxiliary-variable estimator. When the effect size is relatively large, the method provides a substantial reduction in the size of the BICRs. In a trial with 722 participants and a HR of 0.48, an average audit of 28% of the data was needed and always confirmed the treatment effect as assessed by local evaluations. More moderate effect sizes and/or smaller trial sizes required larger proportions of audited images, ranging from 57% to 100% for HRs ranging from 0.55 to 0.77 and sample sizes between 209 and 737. The method is developed for a simple random sample of study subjects. In studies with low event rates, more efficient estimation may result from sampling individuals with events at a higher rate. The proposed strategy can greatly decrease the costs and time associated with BICR, by reducing the number of images undergoing review. The savings will depend on the underlying treatment effect and trial size, with larger treatment effects and larger trials requiring smaller proportions of audited data.

  17. A systematic approach for fine-tuning of fuzzy controllers applied to WWTPs

    DEFF Research Database (Denmark)

    Ruano, M.V.; Ribes, J.; Sin, Gürkan

    2010-01-01

    A systematic approach for fine-tuning fuzzy controllers has been developed and evaluated for an aeration control system implemented in a WWTP. The challenge with the application of fuzzy controllers to WWTPs is simply that they contain many parameters, which need to be adjusted for different WWTP ...

  18. Fiducial registration error as a statistical process control metric in image-guided radiotherapy with prostatic markers

    International Nuclear Information System (INIS)

    Ung, M.N.; Wee, Leonard

    2010-01-01

    Full text: Portal imaging of implanted fiducial markers has been in use for image-guided radiotherapy (IGRT) of prostate cancer, with ample attention to localization accuracy and organ motion. The geometric uncertainties in point-based rigid-body (PBRB) image registration during localization of prostate fiducial markers can be quantified in terms of a fiducial registration error (FRE). Statistical process control charts for individual patients can be designed to identify potentially significant deviation of FRE from expected behaviour. In this study, the aim was to retrospectively apply statistical process control methods to FREs in 34 individuals to identify parameters that may impact on the process stability in image-based localization. A robust procedure for estimating control parameters, control limits and fixed tolerance levels from a small number of initial observations has been proposed and discussed. Four distinct types of qualitative control chart behaviour have been observed. Probable clinical factors leading to IGRT process instability are discussed in light of the control chart behaviour. Control charts have been shown to be a useful decision-making tool for detecting potentially out-of-control processes on an individual basis. They can sensitively identify potential problems that warrant more detailed investigation in the IGRT of prostate cancer.

  19. Design of robust adaptive controller and feedback error learning for rehabilitation in Parkinson's disease: a simulation study.

    Science.gov (United States)

    Rouhollahi, Korosh; Emadi Andani, Mehran; Karbassi, Seyed Mahdi; Izadi, Iman

    2017-02-01

    Deep brain stimulation (DBS) is an efficient therapy to control movement disorders such as Parkinson's tremor. The prevailing practice is stimulation of one area of the basal ganglia (BG) by DBS without feedback. The advantage of using feedback is the reduction of the additional stimulatory signal delivered to the brain, which in turn reduces the side effects caused by excessive stimulation intensity. In fact, the stimulatory intensity of the controllers is decreased in proportion to the reduction of hand tremor. The objective of this study is to design a new controller structure to decrease three indicators: (i) the hand tremor; (ii) the level of delivered stimulation in the disease condition; and (iii) the ratio of the level of delivered stimulation in the health condition to that in the disease condition. For this purpose, the authors offer a new closed-loop control structure to stimulate two areas of the BG simultaneously. One area (STN: subthalamic nucleus) is stimulated by an adaptive controller with feedback error learning. The other area (GPi: globus pallidus internal) is stimulated by a partial state feedback (PSF) controller. Considering the three indicators, the results show that stimulating two areas simultaneously leads to better performance compared with stimulating one area only. It is shown that both the PSF and adaptive controllers are robust with regard to system parameter uncertainties. In addition, a method is proposed to update the parameters of the BG model in real time. As a result, the parameters of the controllers can be updated based on the new parameters of the BG model.

  20. Biometrics based key management of double random phase encoding scheme using error control codes

    Science.gov (United States)

    Saini, Nirmala; Sinha, Aloka

    2013-08-01

    In this paper, an optical security system is proposed in which the key of the double random phase encoding technique is linked to the biometrics of the user to make it user specific. The error in recognition due to biometric variation is corrected by encoding the key using a BCH code. A user-specific shuffling key is used to increase the separation between the genuine and impostor Hamming distance distributions. This shuffling key is then further secured using RSA public key encryption to enhance the security of the system. An XOR operation is performed between the encoded key and the feature vector obtained from the biometrics. The RSA-encoded shuffling key and the data obtained from the XOR operation are stored in a token. The main advantage of the present technique is that key retrieval is possible only in the simultaneous presence of the token and the biometrics of the user, which not only authenticates the presence of the original input but also secures the key of the system. Computational experiments showed the effectiveness of the proposed technique for key retrieval in the decryption process by using the live biometrics of the user.
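
    The key-binding step can be illustrated with a fuzzy-commitment-style toy: encode the key with an error-correcting code, XOR it with the enrolment biometric, and undo the XOR with a noisy query biometric before decoding. In the sketch below a repetition code stands in for the paper's BCH code, and the shuffling-key and RSA layers are omitted; all data are synthetic.

    ```python
    # Illustrative key binding: repetition code stands in for the BCH code.
    import numpy as np

    rng = np.random.default_rng(3)
    key = rng.integers(0, 2, 16)
    enc = np.repeat(key, 5)                     # rate-1/5 repetition encoding
    bio_enroll = rng.integers(0, 2, enc.size)   # enrolment biometric bits
    token = enc ^ bio_enroll                    # stored: encoded key XOR biometrics

    # Verification with a noisy biometric: XOR then majority-decode each block.
    noise = (rng.random(enc.size) < 0.05).astype(int)
    bio_query = bio_enroll ^ noise
    recovered = ((token ^ bio_query).reshape(16, 5).sum(axis=1) > 2).astype(int)
    print("key recovered:", np.array_equal(recovered, key))
    ```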

  1. Integration of process-oriented control with systematic inspection in FRAMATOME-FBFC fuel manufacturing

    International Nuclear Information System (INIS)

    Kopff, G.

    2000-01-01

    The classical approach to quality control is essentially based on final inspection of the product, conducted through a qualified process. The main drawback of this approach lies in the separation and, therefore, in the low feedback between manufacturing and quality control, leading to a very static quality system. As a remedy, the modern approach to quality management focuses on the need for continuous improvement through process-oriented quality control. In the classical approach, high reliability of nuclear fuel and a high quality level of the main characteristics are assumed to be attained, at the manufacturing step, through 100% inspection of the product, generally with automated inspection equipment. Such 100% final inspection is not appropriate for obtaining a homogeneous product with minimum variability, and cannot be a substitute for the SPC (Statistical Process Control) tools which are rightly designed with this aim. On the other hand, SPC methods, which detect process changes and are used to keep the process "under control", leading to the optimal distribution of the quality characteristics, do not protect against nonsystematic or local disturbances at low frequency. Only systematic control is capable of detecting local quality troubles. In fact, both approaches, SPC and systematic inspection, are complementary, because they are remedies for distinct causes of process and product changes. The term 'statistical' in the expression 'SPC' refers less to the sampling techniques than to the control of global distribution parameters of product or process variables (generally location and dispersion parameters). The successive integration levels of process control methods with systematic inspection are described and illustrated by examples from FRAMATOME-FBFC fuel manufacturing, from the simple control chart for checking the performance stability of automated inspection equipment to the global process control system including systematic inspection. This kind of

  2. Multi-Stage Optimization-Based Automatic Voltage Control Systems Considering Wind Power Forecasting Errors

    DEFF Research Database (Denmark)

    Qin, Nan; Bak, Claus Leth; Abildgaard, Hans

    2017-01-01

    This paper proposes an automatic voltage control (AVC) system for power systems with limited continuous voltage control capability. The objective is to minimize the operational cost over a period, which consists of the power loss in the grid, the shunt switching cost, and the transformer tap change ... electricity control center, where study cases based on the western Danish power system demonstrate the superiority of the proposed AVC system in terms of cost minimization. Monte Carlo simulations are carried out to verify the robustness improvements of the proposed method.

  3. Analysis of Steady-State Error in Torque Current Component Control of PMSM Drive

    Directory of Open Access Journals (Sweden)

    BRANDSTETTER, P.

    2017-05-01

    Full Text Available The paper presents the dynamic properties of a vector-controlled permanent magnet synchronous motor drive supplied by a voltage source inverter. The paper deals with a control loop for the torque-producing stator current. The fundamental mathematical description of the vector control structure of the permanent magnet synchronous motor drive is given, with respect to the current control for the d-axis and q-axis of the rotor rotating coordinate system. The derivations of the steady-state deviation for schemes with and without decoupling circuits are described for the q-axis. The properties of both schemes are verified using the MATLAB-SIMULINK program, considering a lower and a higher value of inertia, and by experimental measurements in our laboratory. The simulation and experimental results are presented and discussed at the end of the paper.

  4. Implementation of quality control systematics for personnel monitoring services

    International Nuclear Information System (INIS)

    Franco, J.O.A.

    1984-01-01

    The implementation of statistical quality control techniques used in industrial practice is proposed for dosimetric services. 'Control charts' and 'sampling inspection' are adapted, respectively, for control of the measuring process and of the dose results produced in routine work. A chapter on Radiation Protection and Personnel Monitoring is included. (M.A.C.) [pt]
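
    The control-chart half of the proposal can be illustrated directly. The sketch below builds a Shewhart individuals chart from an in-control baseline of (synthetic) dosimeter readings and flags routine results outside the mean ± 3 sigma limits; the data and limits are illustrative, not the service's actual tolerances.

    ```python
    # Shewhart individuals chart on synthetic dosimeter readings: limits come
    # from an in-control baseline; routine results outside them are flagged.
    import numpy as np

    rng = np.random.default_rng(4)
    baseline = rng.normal(1.00, 0.05, 30)       # in-control reference readings
    center, sigma = baseline.mean(), baseline.std(ddof=1)
    ucl, lcl = center + 3 * sigma, center - 3 * sigma

    routine = rng.normal(1.00, 0.05, 20)        # routine dose results
    routine[12] = 1.35                          # injected out-of-control value
    for i, v in enumerate(routine):
        if not lcl <= v <= ucl:
            print(f"sample {i}: {v:.2f} outside [{lcl:.2f}, {ucl:.2f}]")
    ```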

  5. Failing to Forget: Prospective Memory Commission Errors Can Result from Spontaneous Retrieval and Impaired Executive Control

    Science.gov (United States)

    Scullin, Michael K.; Bugg, Julie M.

    2013-01-01

    Prospective memory (PM) research typically examines the ability to remember to execute delayed intentions but often ignores the ability to forget finished intentions. We had participants perform (or not perform; control group) a PM task and then instructed them that the PM task was finished. We later (re)presented the PM cue. Approximately 25% of…

  6. Communication: Toward an improved control of the fixed-node error in quantum Monte Carlo: The case of the water molecule

    International Nuclear Information System (INIS)

    Caffarel, Michel; Applencourt, Thomas; Scemama, Anthony; Giner, Emmanuel

    2016-01-01

    All-electron Fixed-node Diffusion Monte Carlo calculations for the nonrelativistic ground-state energy of the water molecule at equilibrium geometry are presented. The determinantal part of the trial wavefunction is obtained from a selected Configuration Interaction calculation [Configuration Interaction using a Perturbative Selection done Iteratively (CIPSI) method] including up to about 1.4 × 10⁶ determinants. Calculations are made using the cc-pCVnZ family of basis sets, with n = 2 to 5. In contrast with most quantum Monte Carlo works, no re-optimization of the determinantal part in the presence of a Jastrow is performed. For the largest cc-pCV5Z basis set the lowest upper bound for the ground-state energy reported so far of −76.437 44(18) is obtained. The fixed-node energy is found to decrease regularly as a function of the cardinal number n and the Complete Basis Set limit associated with exact nodes is easily extracted. The resulting energy of −76.438 94(12) — in perfect agreement with the best experimentally derived value — is the most accurate theoretical estimate reported so far. We emphasize that employing selected configuration interaction nodes of increasing quality in a given family of basis sets may represent a simple, deterministic, reproducible, and systematic way of controlling the fixed-node error in diffusion Monte Carlo.
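
    The basis-set extrapolation mentioned in the abstract can be sketched in a few lines. The code below fits the common E(n) = E_CBS + A·n⁻³ form to a cc-pCVnZ energy series; the extrapolation form is an assumption and the energies are illustrative placeholders, not the paper's values.

    ```python
    # Hedged sketch of a complete-basis-set (CBS) extrapolation; the n**-3 form
    # is a common choice assumed here, and the energies are placeholders.
    import numpy as np

    n = np.array([2.0, 3.0, 4.0, 5.0])                   # basis-set cardinal numbers
    E = np.array([-76.410, -76.428, -76.434, -76.4374])  # hypothetical energies

    # Linear least squares for E(n) = E_cbs + A * n**-3.
    design = np.column_stack([np.ones_like(n), n ** -3])
    (e_cbs, A), *_ = np.linalg.lstsq(design, E, rcond=None)
    print(f"CBS estimate: {e_cbs:.5f} hartree")
    ```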

  7. Communication: Toward an improved control of the fixed-node error in quantum Monte Carlo: The case of the water molecule

    Energy Technology Data Exchange (ETDEWEB)

    Caffarel, Michel; Applencourt, Thomas; Scemama, Anthony [Laboratoire de Chimie et Physique Quantique, CNRS-Université de Toulouse, Toulouse (France); Giner, Emmanuel [Dipartimento di Scienze Chimiche e Farmaceutiche, Università degli Studi di Ferrara, Ferrara (Italy)

    2016-04-21

    All-electron Fixed-node Diffusion Monte Carlo calculations for the nonrelativistic ground-state energy of the water molecule at equilibrium geometry are presented. The determinantal part of the trial wavefunction is obtained from a selected Configuration Interaction calculation [Configuration Interaction using a Perturbative Selection done Iteratively (CIPSI) method] including up to about 1.4 × 10{sup 6} determinants. Calculations are made using the cc-pCVnZ family of basis sets, with n = 2 to 5. In contrast with most quantum Monte Carlo works, no re-optimization of the determinantal part in the presence of a Jastrow is performed. For the largest cc-pCV5Z basis set the lowest upper bound for the ground-state energy reported so far of −76.437 44(18) is obtained. The fixed-node energy is found to decrease regularly as a function of the cardinal number n and the Complete Basis Set limit associated with exact nodes is easily extracted. The resulting energy of −76.438 94(12) — in perfect agreement with the best experimentally derived value — is the most accurate theoretical estimate reported so far. We emphasize that employing selected configuration interaction nodes of increasing quality in a given family of basis sets may represent a simple, deterministic, reproducible, and systematic way of controlling the fixed-node error in diffusion Monte Carlo.

  8. Sharing is caring, but not error free: transparency of granular controls for sharing personal health information in social networks.

    Science.gov (United States)

    Hartzler, Andrea; Skeels, Meredith M; Mukai, Marlee; Powell, Christopher; Klasnja, Predrag; Pratt, Wanda

    2011-01-01

    When patients share personal health information with family and friends, their social networks become better equipped to help them through serious health situations. Thus, patients need tools that enable granular control over what personal health information is shared and with whom within social networks. Yet, we know little about how well such tools support patients' complex sharing needs. We report on a lab study in which we examined the transparency of sharing interfaces that display an overview and details of information sharing with network connections in an internet-based personal health information management tool called HealthWeaver. Although participants found the interfaces easy to use and were highly confident in their interpretation of the sharing controls, several participants made errors in determining what information was shared with whom. Our findings point to the critical importance of future work that examines design of usable interfaces that offer transparent granularity in support of patients' complex information sharing practices.

  9. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
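
    The setting is easy to make concrete: XOR-comparing a received word with each codeword yields one candidate error vector per codeword, and the usual minimum-distance rule (assumed here) picks the codeword whose error vector has the smallest Hamming weight. The sketch below uses a made-up toy code.

    ```python
    # Candidate error vectors and minimum-weight decoding on a toy (0,1) code.
    import numpy as np

    code = np.array([[0, 0, 0, 0, 0],
                     [1, 1, 1, 0, 0],
                     [0, 0, 1, 1, 1],
                     [1, 1, 0, 1, 1]])
    received = np.array([1, 1, 1, 1, 0])

    error_vectors = code ^ received           # one candidate per codeword
    weights = error_vectors.sum(axis=1)       # Hamming weights
    best = weights.argmin()
    print("decoded:", code[best], "error vector:", error_vectors[best])
    ```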

  10. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  11. Neurometaplasticity: Glucoallostasis control of plasticity of the neural networks of error commission, detection, and correction modulates neuroplasticity to influence task precision

    Science.gov (United States)

    Welcome, Menizibeya O.; Dane, Şenol; Mastorakis, Nikos E.; Pereverzev, Vladimir A.

    2017-12-01

    The term "metaplasticity" is a recent one, which means plasticity of synaptic plasticity. Correspondingly, neurometaplasticity simply means plasticity of neuroplasticity, indicating that a previous plastic event determines the current plasticity of neurons. Emerging studies suggest that neurometaplasticity underlie many neural activities and neurobehavioral disorders. In our previous work, we indicated that glucoallostasis is essential for the control of plasticity of the neural network that control error commission, detection and correction. Here we review recent works, which suggest that task precision depends on the modulatory effects of neuroplasticity on the neural networks of error commission, detection, and correction. Furthermore, we discuss neurometaplasticity and its role in error commission, detection, and correction.

  12. Automatic parameter estimation of multicompartmental neuron models via minimization of trace error with control adjustment.

    Science.gov (United States)

    Brookings, Ted; Goeritz, Marie L; Marder, Eve

    2014-11-01

    We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current-clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between biological and model neuron is inevitable and results in poor phenomenological match between the model and data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.
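
    The role of the weak control adjustment can be conveyed with a toy version of the idea (this is a hedged re-creation, not the authors' algorithm): while a candidate parameter is simulated, a small proportional observer term nudges the model trace toward the data, and the control effort needed serves as the fit cost, which is minimal near the true parameter.

    ```python
    # Toy observer-aided fitting: the control effort needed to keep a model
    # trace aligned with the data is smallest at the true parameter value.
    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 5.0, dt)
    drive = np.sin(t)                          # injected perturbation (toy)

    def trace(a, data=None, gain=0.0):
        y = np.zeros_like(t)
        for k in range(1, t.size):
            u = gain * (data[k - 1] - y[k - 1]) if data is not None else 0.0
            y[k] = y[k - 1] + dt * (-a * y[k - 1] + drive[k - 1] + u)
        return y

    data = trace(2.0)                          # "recorded" trace, true a = 2
    for a in (1.0, 2.0, 3.0):
        y = trace(a, data, gain=5.0)
        effort = np.mean((5.0 * (data - y)) ** 2)   # control effort as fit cost
        print(f"a = {a}: mean squared control effort {effort:.5f}")
    ```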

  13. Quality of life, psychological adjustment, and adaptive functioning of patients with intoxication-type inborn errors of metabolism - a systematic review.

    Science.gov (United States)

    Zeltner, Nina A; Huemer, Martina; Baumgartner, Matthias R; Landolt, Markus A

    2014-10-25

    In recent decades, considerable progress in diagnosis and treatment of patients with intoxication-type inborn errors of metabolism (IT-IEM) such as urea cycle disorders (UCD), organic acidurias (OA), maple syrup urine disease (MSUD), or tyrosinemia type 1 (TYR 1) has resulted in a growing group of long-term survivors. However, IT-IEM still require intense patient and caregiver effort in terms of strict dietetic and pharmacological treatment, and the threat of metabolic crises is always present. Furthermore, crises can affect the central nervous system (CNS), leading to cognitive, behavioural and psychiatric sequelae. Consequently, the well-being of the patients warrants consideration from both a medical and a psychosocial viewpoint by assessing health-related quality of life (HrQoL), psychological adjustment, and adaptive functioning. To date, an overview of findings on these topics for IT-IEM is lacking. We therefore aimed to systematically review the research on HrQoL, psychological adjustment, and adaptive functioning in patients with IT-IEM. Relevant databases were searched with predefined keywords. Study selection was conducted in two steps based on predefined criteria. Two independent reviewers completed the selection and data extraction. Eleven articles met the inclusion criteria. Studies were of varying methodological quality and used different assessment measures. Findings on HrQoL were inconsistent, with some showing lower and others showing higher or equal HrQoL for IT-IEM patients compared to norms. Findings on psychological adjustment and adaptive functioning were more consistent, showing mostly either no difference or worse adjustment of IT-IEM patients compared to norms. Single medical risk factors for HrQoL, psychological adjustment, or adaptive functioning have been addressed, while psychosocial risk factors have not been addressed. Data on HrQoL, psychological adjustment, and adaptive functioning for IT-IEM are sparse. Studies are inconsistent in

  14. DSOGI-PLL Based Power Control Method to Mitigate Control Errors Under Disturbances of Grid Connected Hybrid Renewable Power Systems

    Directory of Open Access Journals (Sweden)

    Mehmet Emin Meral

    2018-01-01

    Full Text Available The control of power converter devices is one of the main research lines for interfaced renewable energy sources, such as solar cells and wind turbines. Therefore, suitable control algorithms should be designed in order to regulate power or current properly and attain good power quality under disturbances such as voltage sag/swell, voltage unbalances and fluctuations, long interruptions, and harmonics. Various control strategies based on synchronisation techniques have been implemented for hybrid power system applications under unbalanced conditions in the literature. In this paper, a synchronisation-algorithm-based Proportional-Resonant (PR) power/current controller is applied to the hybrid power system (solar cell + wind turbine + grid), and a Dual Second Order Generalized Integrator-Phase Locked Loop (DSOGI-PLL) based PR controller in the stationary reference frame provides a solution to overcome these problems. The influence of various cases, such as unbalanced and harmonic conditions, is examined and analysed, and the PR controllers based on DSOGI-PLL and SRF-PLL are compared. The results verify the effectiveness and correctness of the proposed DSOGI-PLL based power control method.

  15. Developing a Systematic Corrosion Control Evaluation Approach in Flint

    Science.gov (United States)

    The presentation covers the projects recommended by the Flint Safe Drinking Water Task Force for corrosion control assessment in Flint, focusing on the sequential sampling project, the pipe rigs, and the pipe scale analyses.

  16. Impact of Glucose Meter Error on Glycemic Variability and Time in Target Range During Glycemic Control After Cardiovascular Surgery.

    Science.gov (United States)

    Karon, Brad S; Meeusen, Jeffrey W; Bryant, Sandra C

    2015-08-25

    We retrospectively studied the impact of glucose meter error on the efficacy of glycemic control after cardiovascular surgery. Adult patients undergoing intravenous insulin glycemic control therapy after cardiovascular surgery, with 12-24 consecutive glucose meter measurements used to make insulin dosing decisions, had glucose values analyzed to determine glycemic variability by both standard deviation (SD) and continuous overall net glycemic action (CONGA), and the percentage of glucose values in the target glucose range (110-150 mg/dL). Information was recorded for 70 patients during each of 2 periods, with different glucose meters used to measure glucose and dose insulin during each period but no other changes to the glycemic control protocol. Accuracy and precision of each meter were also compared using whole blood specimens from ICU patients. Glucose meter 1 (GM1) had a median bias of 11 mg/dL compared to a laboratory reference method, while glucose meter 2 (GM2) had a median bias of 1 mg/dL. GM1 and GM2 differed little in precision (CV = 2.0% and 2.7%, respectively). Compared to the period when GM1 was used to make insulin dosing decisions, patients whose insulin dose was managed by GM2 demonstrated reduced glycemic variability as measured by both SD (13.7 vs 21.6 mg/dL) and CONGA, along with a higher percentage of values in the target range. Reduced glucose meter error (bias) was thus associated with decreased glycemic variability and an increased percentage of values in target glucose range for patients placed on intravenous insulin therapy following cardiovascular surgery. © 2015 Diabetes Technology Society.
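
    The variability metrics named above are straightforward to compute from a glucose series. A minimal sketch (the readings are invented, and hourly sampling with a one-step CONGA lag is an assumption; the study's exact CONGA settings are not given in the record):

        import numpy as np

        def glycemic_sd(glucose):
            """Overall glycemic variability as the sample standard deviation (mg/dL)."""
            return np.std(glucose, ddof=1)

        def conga(glucose, lag=1):
            """Continuous overall net glycemic action: SD of differences between
            observations `lag` steps apart (here one step = one hour, an assumption)."""
            diffs = np.asarray(glucose[lag:]) - np.asarray(glucose[:-lag])
            return np.std(diffs, ddof=1)

        def pct_in_target(glucose, low=110, high=150):
            """Share of values inside the 110-150 mg/dL target band from the study."""
            g = np.asarray(glucose)
            return 100.0 * np.mean((g >= low) & (g <= high))

        readings = [132, 145, 158, 149, 137, 122, 118, 126, 141, 153]  # made-up mg/dL
        print(glycemic_sd(readings), conga(readings), pct_in_target(readings))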

  17. Berkson error adjustment and other exposure surrogates in occupational case-control studies, with application to the Canadian INTEROCC study.

    Science.gov (United States)

    Oraby, Tamer; Sivaganesan, Siva; Bowman, Joseph D; Kincl, Laurel; Richardson, Lesley; McBride, Mary; Siemiatycki, Jack; Cardis, Elisabeth; Krewski, Daniel

    2018-05-01

    Many epidemiological studies assessing the relationship between exposure and disease are carried out without data on individual exposures. When this barrier is encountered in occupational studies, the subject exposures are often evaluated with a job-exposure matrix (JEM), which consists of mean exposure for occupational categories measured on a comparable group of workers. One of the objectives of the seven-country case-control study of occupational exposure and brain cancer risk, INTEROCC, was to investigate the relationship of occupational exposure to electromagnetic fields (EMF) in different frequency ranges and brain cancer risk. In this paper, we use the Canadian data from INTEROCC to estimate the odds of developing brain tumours due to occupational exposure to EMF. The first step was to find the best EMF exposure surrogate among the arithmetic mean, the geometric mean, and the mean of log-normal exposure distribution for each occupation in the JEM, in comparison to Berkson error adjustments via numerical approximation of the likelihood function. Contrary to previous studies of Berkson errors in JEMs, we found that the geometric mean was the best exposure surrogate. This analysis provided no evidence that cumulative lifetime exposure to extremely low frequency magnetic fields increases brain cancer risk, a finding consistent with other recent epidemiological studies.
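
    The three candidate surrogates compared in the study are simple to compute from the measurements behind a JEM cell. A minimal sketch with invented exposure values (the study's Berkson-error-adjusted likelihood itself is not reproduced here):

        import numpy as np

        # Invented exposure measurements for one occupational category.
        x = np.array([0.12, 0.25, 0.08, 0.40, 0.18, 0.30])

        arithmetic_mean = x.mean()
        geometric_mean = np.exp(np.log(x).mean())

        # Mean of a fitted log-normal: exp(mu + sigma^2 / 2), with mu and sigma
        # estimated from the log-transformed data.
        mu, sigma = np.log(x).mean(), np.log(x).std(ddof=1)
        lognormal_mean = np.exp(mu + sigma**2 / 2)

        print(arithmetic_mean, geometric_mean, lognormal_mean)
        # The study reports that, judged against Berkson error adjustments via the
        # likelihood, the geometric mean performed best as the JEM exposure surrogate.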

  18. The economics of malaria control and elimination: a systematic review.

    Science.gov (United States)

    Shretta, Rima; Avanceña, Anton L V; Hatefi, Arian

    2016-12-12

    Declining donor funding and competing health priorities threaten the sustainability of malaria programmes. Elucidating the cost and benefits of continued investments in malaria could encourage sustained political and financial commitments. The evidence, although available, remains disparate. This paper reviews the existing literature on the economic and financial cost and return of malaria control, elimination and eradication. A review of articles that were published on or before September 2014 on the cost and benefits of malaria control and elimination was performed. Studies were classified based on their scope and were analysed according to two major categories: cost of malaria control and elimination to a health system, and cost-benefit studies. Only studies involving more than two control or elimination interventions were included. Outcomes of interest were total programmatic cost, cost per capita, and benefit-cost ratios (BCRs). All costs were converted to 2013 US$ for standardization. Of the 6425 articles identified, 54 studies were included in this review. Twenty-two were focused on elimination or eradication while 32 focused on intensive control. Forty-eight per cent of studies included in this review were published on or after 2000. Overall, the annual per capita cost of malaria control to a health system ranged from $0.11 to $39.06 (median: $2.21) while that for malaria elimination ranged from $0.18 to $27 (median: $3.00). BCRs of investing in malaria control and elimination ranged from 2.4 to over 145. Overall, investments needed for malaria control and elimination varied greatly amongst the various countries and contexts. In most cases the cost of elimination was greater than the cost of control, but the benefits greatly outweighed the costs. Information from this review provides guidance to

  19. A Study on Large Display Panel Design for the Countermeasures against Team Errors within the Main Control Room of APR-1400

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The personal aspect of human error has mainly been addressed through education and training. In the system aspect, however, the education and training system needs to be reconsidered for more effective reduction of human errors arising from various system hazards. Traditionally, education and training systems have focused not on team skills, such as communication, situational awareness, and coordination, but on individual knowledge, skills, and attitudes. The team factor, however, is one of the crucial issues for reducing human errors in most industries. In this study, we identify the emerging types of team errors, especially in digitalized control rooms of nuclear power plants such as the APR-1400 main control room. Most work in the nuclear industry is performed by a team of two or more persons. Even though individual errors can be detected and recovered by qualified others and/or a well-trained team, it is rather seldom that errors made by the team are easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other, forming a dependent aggregate that accomplishes a valuable goal. Team error is one of the typical organizational errors that may occur during operations in nuclear power plants. The large display panel is a representative feature of the digitalized control room. As a group-view display, the large display panel provides a plant overview to the operators. In terms of team performance and team errors, however, the large display panel is still a matter of discussion, because it was designed merely as a passive display. In this study, we propose a revised large display panel integrated with several alternative interfaces against feasible team errors.

  20. A Study on Large Display Panel Design for the Countermeasures against Team Errors within the Main Control Room of APR-1400

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Lee, Yong Hee

    2015-01-01

    The personal aspect of human error has mainly been addressed through education and training. In the system aspect, however, the education and training system needs to be reconsidered for more effective reduction of human errors arising from various system hazards. Traditionally, education and training systems have focused not on team skills, such as communication, situational awareness, and coordination, but on individual knowledge, skills, and attitudes. The team factor, however, is one of the crucial issues for reducing human errors in most industries. In this study, we identify the emerging types of team errors, especially in digitalized control rooms of nuclear power plants such as the APR-1400 main control room. Most work in the nuclear industry is performed by a team of two or more persons. Even though individual errors can be detected and recovered by qualified others and/or a well-trained team, it is rather seldom that errors made by the team are easily detected and properly recovered by the team itself. Note that a team is defined as two or more people who interact appropriately with each other, forming a dependent aggregate that accomplishes a valuable goal. Team error is one of the typical organizational errors that may occur during operations in nuclear power plants. The large display panel is a representative feature of the digitalized control room. As a group-view display, the large display panel provides a plant overview to the operators. In terms of team performance and team errors, however, the large display panel is still a matter of discussion, because it was designed merely as a passive display. In this study, we propose a revised large display panel integrated with several alternative interfaces against feasible team errors.

  1. Establishment of Systematic Design Control/Configuration Management Processes to Enhance Engineering Capability

    International Nuclear Information System (INIS)

    Inagaki, T.; Hamada, T.; Ihara, T.

    2016-01-01

    Full text: After the accident of Fukushima Daiichi Nuclear Power Plant in 2011, Tokyo Electric Power Company (TEPCO) launched various measures to enhance plant safety and safety culture of its employees. One of the important aspects of these measures is to enhance engineering capability and TEPCO is conducting actions to establish systematic design control and configuration management processes as an important foundation of such engineering capability. This paper describes how TEPCO is establishing systematic configuration management processes from three aspects, i.e., design requirement and bases management, facility configuration control, and configuration change management. It also provides brief information of the IT systems that are being introduced and will support the systematic design control and configuration management processes. (author)

  2. Improved read disturb and write error rates in voltage-control spintronics memory (VoCSM) by controlling energy barrier height

    Science.gov (United States)

    Inokuchi, T.; Yoda, H.; Kato, Y.; Shimizu, M.; Shirotori, S.; Shimomura, N.; Koi, K.; Kamiguchi, Y.; Sugiyama, H.; Oikawa, S.; Ikegami, K.; Ishikawa, M.; Altansargai, B.; Tiwari, A.; Ohsawa, Y.; Saito, Y.; Kurobe, A.

    2017-06-01

    A hybrid writing scheme that combines the spin Hall effect and voltage-controlled magnetic-anisotropy effect is investigated in Ta/CoFeB/MgO/CoFeB/Ru/CoFe/IrMn junctions. The write current and control voltage are applied to Ta and CoFeB/MgO/CoFeB junctions, respectively. The critical current density required for switching the magnetization in CoFeB was modulated 3.6-fold by changing the control voltage from -1.0 V to +1.0 V. This modulation of the write current density is explained by the change in the surface anisotropy of the free layer from 1.7 mJ/m² to 1.6 mJ/m², which is caused by the electric field applied to the junction. The read disturb rate and write error rate, which are important performance parameters for memory applications, are drastically improved, and no error was detected in 5 × 10⁸ cycles by controlling read and write sequences.

  3. A systematic methodology for controller tuning in wastewater treatment plants

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Jørgensen, S.B.; Sin, G.

    2012-01-01

    Wastewater treatment plants are typically subject to continuous disturbances caused by influent variations, which exhibit diurnal patterns as well as stochastic changes due to rain and storm water events. In order to achieve an efficient operation, the control system of the plant should be able to...

  4. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    International Nuclear Information System (INIS)

    Wang, B; Pan, B; Tao, R; Lubineau, G

    2017-01-01

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner, and further introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach. (paper)

  5. Systematic errors in digital volume correlation due to the self-heating effect of a laboratory x-ray CT scanner

    KAUST Repository

    Wang, B

    2017-02-15

    The use of digital volume correlation (DVC) in combination with laboratory x-ray computed tomography (CT) for full-field internal 3D deformation measurement of opaque materials has flourished in recent years. During x-ray tomographic imaging, the heat generated by the x-ray tube changes the imaging geometry of the x-ray scanner, and further introduces noticeable errors in DVC measurements. In this work, to provide practical guidance for high-accuracy DVC measurement, the errors in displacements and strains measured by DVC due to the self-heating effect of a commercially available x-ray scanner were experimentally investigated. The errors were characterized by performing simple rescan tests with different scan durations. The results indicate that the maximum strain errors associated with the self-heating of the x-ray scanner exceed 400 µε. Possible approaches for minimizing or correcting these displacement and strain errors are discussed. Finally, a series of translation and uniaxial compression tests were performed, in which strain errors were detected and then removed using a pre-established artificial dilatational strain-time curve. Experimental results demonstrate the efficacy and accuracy of the proposed strain error correction approach.
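
    The correction described in the two records above, removing self-heating-induced apparent strain using a strain-time curve pre-established from rescans of an unloaded sample, can be sketched as follows (the polynomial form, scan times, and strain values are assumptions for illustration, not the authors' exact procedure or data):

        import numpy as np

        # Apparent dilatational strain (micro-strain) from rescans of an unloaded
        # sample at several scan times (invented values for illustration).
        scan_time_min = np.array([10, 20, 40, 60, 90, 120])
        apparent_ue = np.array([80, 150, 260, 330, 390, 420])

        # Fit a low-order polynomial strain-time curve to the rescan data.
        coeffs = np.polyfit(scan_time_min, apparent_ue, deg=2)
        drift = np.poly1d(coeffs)

        # Correct a later DVC measurement by subtracting the drift predicted
        # at its scan time.
        measured_ue, t_scan = 950.0, 75.0
        corrected_ue = measured_ue - drift(t_scan)
        print(f"drift at {t_scan} min: {drift(t_scan):.0f} ue, corrected: {corrected_ue:.0f} ue")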

  6. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    In spectacular events, a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  7. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  8. Strategies for tobacco control in India: a systematic review.

    Directory of Open Access Journals (Sweden)

    Ailsa J McKay

    Full Text Available Tobacco control needs in India are large and complex. Evaluation of outcomes to date has been limited. To review the extent of tobacco control measures, and the outcomes of associated trialled interventions, in India. Information was identified via database searches, journal hand-searches, reference and citation searching, and contact with experts. Studies of any population resident in India were included. Studies where outcomes were not yet available, not directly related to tobacco use, or not specific to India, were excluded. Pre-tested proformas were used for data extraction and quality assessment. Studies with reliability concerns were excluded from some aspects of analysis. The Framework Convention on Tobacco Control (FCTC) was used as a framework for synthesis. Heterogeneity limited meta-analysis options. Synthesis was therefore predominantly narrative. Additional to the Global Tobacco Surveillance System data, 80 studies were identified, 45 without reliability concerns. Most related to education (FCTC Article 12) and tobacco-use cessation (Article 14). They indicated widespread understanding of tobacco-related harm, but less knowledge about specific consequences of use. Healthcare professionals reported low confidence in cessation assistance, in keeping with low levels of training. Training for schoolteachers also appeared suboptimal. Educational and cessation assistance interventions demonstrated positive impact on tobacco use. Studies relating to smoke-free policies (Article 8) and tobacco advertisements and availability (Articles 13 and 16) indicated increasingly widespread smoke-free policies, but persistence of high levels of SHS exposure, tobacco promotions and availability, including to minors. Data relating to taxation/pricing and packaging (Articles 6 and 11) were limited. We did not identify any studies of product regulation, alternative employment strategies, or illicit trade (Articles 9, 10, 15 and 17). Tobacco-use outcomes could be improved

  9. Postural control in blind individuals: A systematic review.

    Science.gov (United States)

    Parreira, Rodolfo Borges; Grecco, Luanda André Collange; Oliveira, Claudia Santos

    2017-09-01

    Postural control (PC) requires the interaction of the three sensory systems for good maintenance of balance, and in blind people the lack of visual input can harm PC. The objective was thus to perform a literature review concerning the role of sight in the maintenance of PC and the adaptation of brain structures when vision is absent. Studies were searched in PubMed and EMBASE that included individuals with congenital blindness. Articles studying persons with acquired blindness or low vision were excluded from this review. 26 out of 322 articles were selected for review, and we found that 1) blind individuals exhibit PC deficits that are compensated by the intensification of the remaining systems; 2) neuroplastic adaptation occurs throughout the entire cerebral cortex; and 3) sensorimotor stimulation and transcranial direct current stimulation appear to be rehabilitation strategies. According to this review, the findings suggest that improved remaining sensations, in the presence of adaptations and neuroplasticity, do not translate into better postural control performance. Regarding rehabilitation strategies, more studies are needed to show which therapeutic modality best contributes to postural control. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...
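
    The core idea, predicting a result's standard deviation from its steps and then checking replicates against it, can be sketched as follows (the per-step values are invented, and combining them in quadrature assumes independent steps; this is an illustration, not Heydorn's full procedure):

        import numpy as np

        # Assumed independent per-step relative standard deviations (%), e.g.
        # sampling, irradiation, and counting in a trace-analysis method.
        step_sd = np.array([1.0, 0.5, 2.0])
        predicted_sd = np.sqrt(np.sum(step_sd**2))   # combine in quadrature

        # Replicate results (invented), expressed as a relative SD (%) for comparison.
        replicates = np.array([100.1, 97.8, 102.3, 99.5, 101.0])
        observed_sd = 100 * replicates.std(ddof=1) / replicates.mean()

        # If the observed variability is consistent with the prediction, the method
        # is in statistical control; a markedly larger observed SD points to an
        # unaccounted (possibly systematic) error source.
        print(predicted_sd, observed_sd)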

  11. Randomised controlled trials of homeopathy in humans: characterising the research journal literature for systematic review.

    Science.gov (United States)

    Mathie, Robert T; Hacke, Daniela; Clausen, Jürgen; Nicolai, Ton; Riley, David S; Fisher, Peter

    2013-01-01

    A new programme of systematic reviews of randomised controlled trials (RCTs) in homeopathy will distinguish important attributes of RCT records, including: placebo controlled versus other-than-placebo (OTP) controlled; individualised versus non-individualised homeopathy; peer-reviewed (PR) versus non peer-reviewed (NPR) sources. (a) To outline the methods used to search and categorise the RCT literature; (b) to report details of the records retrieved; (c) to compare our retrieved records with those reported in two previous systematic reviews (Linde et al., 1997; Shang et al., 2005). Ten major electronic databases were searched for records published up to the end of 2011. A record was accepted for subsequent systematic review if it was a substantive report of a clinical trial of homeopathic treatment or prophylaxis in humans, randomised and controlled, and published in a PR or NPR journal. 489 records were potentially eligible: 226 were rejected as non-journal, minor or repeat publications, or lacking randomisation and/or controls and/or a 'homeopathic' intervention; 263 (164 PR, 99 NPR) were acceptable for systematic review. The 263 accepted records comprised 217 (137 PR, 80 NPR) placebo-controlled RCTs, of which 121 were included by, 66 were published after, and 30 were potentially eligible for, but not listed by, Linde or Shang. The 137 PR records of placebo-controlled RCTs comprise 41 on individualised homeopathy and 96 on non-individualised homeopathy. Our findings clarify the RCT literature in homeopathy. The 263 accepted journal papers will be the basis for our forthcoming programme of systematic reviews. Copyright © 2012 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  12. Systematic review and gamma radiosensitivity of medicinal plants: development of protocol for quality control

    International Nuclear Information System (INIS)

    Oliveira, Ralph Santos

    2006-01-01

    The present study discusses the contribution of adopting more rigorous and objective criteria for the selection and analysis of information sources, leading to more scientific rigour when registering phytotherapic drugs. To this end, the adoption of a previously tested and acknowledged methodology, namely the systematic review, is proposed as a standard for phytotherapic drug analyses. In order to show the differences brought about by the systematic review during the registration procedures of phytotherapic drugs, the case of Maytenus ilicifolia (known popularly in Brazil as 'espinheira-santa') is presented. As is well known, the use of ionizing radiation is expanding, especially in medicine and pharmacy. Therefore, gamma radiation was applied to the microbiological quality control of phytotherapic matrices. Results indicated a positive contribution of the systematic review to the registration procedures of phytotherapic drugs, as well as the advantages of using gamma radiation for the microbiological quality control of phytotherapic matrices. (author)

  13. Optical 'dampening' of the refractive error to axial length ratio: implications for outcome measures in myopia control studies.

    Science.gov (United States)

    Cruickshank, Fiona E; Logan, Nicola S

    2018-05-01

    To gauge the extent to which differences in the refractive error-axial length relationship predicted by geometrical optics are observed in actual refractive/biometric data. This study is a retrospective analysis of existing data. Right eye refractive error [RX] and axial length [AXL] data were collected on 343 6-to-7-year-old children [mean 7.18 years (S.D. 0.35)], 294 12-to-13-year-old children [mean 13.12 years (S.D. 0.32)] and 123 young adults aged 18-to-25-years [mean 20.56 years (S.D. 1.91)]. Distance RX was measured with the Shin-Nippon NVision-K 5001 infrared open-field autorefractor. Child participants were cyclopleged prior to data collection (1% Cyclopentolate Hydrochloride). Myopia was defined as a mean spherical equivalent [MSE] ≤-0.50 D. Axial length was measured using the Zeiss IOLMaster 500. Optical modelling was based on ray tracing and manipulation of parameters of a Gullstrand reduced model eye. There was a myopic shift in mean MSE with age (6-7 years +0.87 D, 12-13 years -0.06 D and 18-25 years -1.41 D), associated with an increase in mean AXL (6-7 years 22.70 mm, 12-13 years 23.49 mm and 18-25 years 23.98 mm). There was a significant negative correlation between MSE and AXL for all age groups. Optical theory predicts that there will be a reduction in the RX:AXL ratio with longer eyes. The participant data, although adhering to this theory, show a reduced effect, with eyes with longer axial lengths having a lower refractive error to axial length ratio than predicted by model eye calculations. We propose that in myopia control intervention studies, when comparing efficacy, consideration should be given to the dampening effect seen with a longer eye. © 2018 The Authors. Ophthalmic and Physiological Optics published by John Wiley & Sons Ltd on behalf of College of Optometrists.
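
    The predicted fall in the RX:AXL ratio follows from a reduced model eye, where ocular refraction is K = n'/L - F. A minimal sketch (the eye power, refractive index, and lengths are textbook-style assumptions, not the study's exact Gullstrand parameters):

        n_prime = 4.0 / 3.0   # assumed refractive index of the reduced eye
        F = 60.0              # assumed eye power (D)

        def refraction(L_mm):
            """Ocular refraction (D) of a reduced eye of axial length L (mm)."""
            return n_prime / (L_mm / 1000.0) - F

        # D-per-mm ratio: the derivative dK/dL = -n'/L^2, evaluated numerically.
        for L in (22.0, 24.0, 26.0):
            ratio = refraction(L + 0.5) - refraction(L - 0.5)  # D per mm
            print(f"L = {L} mm: {refraction(L):+.2f} D, ratio ~ {ratio:.2f} D/mm")

    The magnitude of the ratio shrinks as the eye lengthens (roughly -2.8 D/mm at 22 mm versus -2.0 D/mm at 26 mm under these assumptions), which is the optical "dampening" referred to in the title.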

  14. Creatine supplementation and glycemic control: a systematic review.

    Science.gov (United States)

    Pinto, Camila Lemos; Botelho, Patrícia Borges; Pimentel, Gustavo Duarte; Campos-Ferraz, Patrícia Lopes; Mota, João Felipe

    2016-09-01

    The focus of this review is the effects of creatine supplementation with or without exercise on glucose metabolism. A comprehensive examination of the past 16 years of study within the field provided a distillation of key data. Both in animal and human studies, creatine supplementation together with exercise training demonstrated greater beneficial effects on glucose metabolism; creatine supplementation itself demonstrated positive results in only a few of the studies. In the animal studies, the effects of creatine supplementation on glucose metabolism were even more distinct, and caution is needed in extrapolating these data to different species, especially to humans. Regarding human studies, considering the samples characteristics, the findings cannot be extrapolated to patients who have poorer glycemic control, are older, are on a different pharmacological treatment (e.g., exogenous insulin therapy) or are physically inactive. Thus, creatine supplementation is a possible nutritional therapy adjuvant with hypoglycemic effects, particularly when used in conjunction with exercise.

  15. Framework for the systematic assessment of a material control and accounting system

    International Nuclear Information System (INIS)

    Schechter, R.S.; Sacks, I.J.

    1981-01-01

    Procedures are described for the systematic assessment of a Material Control and Accounting (MC and A) system, in terms of compliance with the proposed MC and A Upgrade Rule. The applicability of these assessment procedures to specific Rule provisions is discussed. Special attention is given to the statistical performance of individual subsystems, and their vulnerability to compromise by insider collusion.

  16. Systematic design of an intra-cycle fueling control system for advanced diesel combustion concepts

    NARCIS (Netherlands)

    Kefalidis, L.

    2017-01-01

    This technical report presents a systematic approach for the design and development of an intra-cycle fueling control system for diesel combustion concepts. A high level system was developed and implemented on an experimental engine setup. Implementation and experimental validation are performed for

  17. Treatment of Test Anxiety by Cue-Controlled Relaxation and Systematic Desensitization

    Science.gov (United States)

    Russell, Richard K.; And Others

    1976-01-01

    Test-anxious subjects (N=19) participated in an outcome study comparing systematic desensitization, cue-controlled relaxation, and no treatment. The treatment groups demonstrated significant improvement on the self-report measures of test and state anxiety but not on the behavioral indices. The potential advantages of this technique over…

  18. Evaluating Data Abstraction Assistant, a novel software application for data abstraction during systematic reviews: protocol for a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Ian J. Saldanha

    2016-11-01

    Full Text Available Abstract Background Data abstraction, a critical systematic review step, is time-consuming and prone to errors. Current standards for approaches to data abstraction rest on a weak evidence base. We developed the Data Abstraction Assistant (DAA), a novel software application designed to facilitate the abstraction process by allowing users to (1) view study article PDFs juxtaposed to electronic data abstraction forms linked to a data abstraction system, (2) highlight (or “pin”) the location of the text in the PDF, and (3) copy relevant text from the PDF into the form. We describe the design of a randomized controlled trial (RCT) that compares the relative effectiveness of (A) DAA-facilitated single abstraction plus verification by a second person, (B) traditional (non-DAA-facilitated) single abstraction plus verification by a second person, and (C) traditional independent dual abstraction plus adjudication to ascertain the accuracy and efficiency of abstraction. Methods This is an online, randomized, three-arm, crossover trial. We will enroll 24 pairs of abstractors (i.e., sample size is 48 participants), each pair comprising one less and one more experienced abstractor. Pairs will be randomized to abstract data from six articles, two under each of the three approaches. Abstractors will complete pre-tested data abstraction forms using the Systematic Review Data Repository (SRDR), an online data abstraction system. The primary outcomes are (1) proportion of data items abstracted that constitute an error (compared with an answer key) and (2) total time taken to complete abstraction (by two abstractors in the pair, including verification and/or adjudication). Discussion The DAA trial uses a practical design to test a novel software application as a tool to help improve the accuracy and efficiency of the data abstraction process during systematic reviews. Findings from the DAA trial will provide much-needed evidence to strengthen current recommendations for data

  19. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human and that people make mistakes all the time. What counts, however, is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  20. The Affect of Varying Arousal Methods Upon Vigilance and Error Detection in an Automated Command and Control Environment

    National Research Council Canada - National Science Library

    Langhals, Brent

    2001-01-01

    .... The study suggests that by continuously applying arousal stimuli, subjects would retain initially high vigilance levels thereby avoiding the vigilance decrement phenomenon and improving error detection...

  1. A systematic method of smooth switching LPV controllers design for a morphing aircraft

    Directory of Open Access Journals (Sweden)

    Jiang Weilai

    2015-12-01

    Full Text Available This paper is concerned with a systematic method of smooth switching linear parameter-varying (LPV) controllers design for a morphing aircraft with a variable wing sweep angle. The morphing aircraft is modeled as an LPV system, whose scheduling parameter is the variation rate of the wing sweep angle. By dividing the scheduling parameter set into subsets with overlaps, output feedback controllers which consider smooth switching are designed and the controllers in overlapped subsets are interpolated from two adjacent subsets. A switching law without constraint on the average dwell time is obtained, which makes the conclusion less conservative. Furthermore, a systematic algorithm is developed to improve the efficiency of the controllers design process. The parameter set is divided into the fewest subsets on the premise that the closed-loop system has a desired performance. Simulation results demonstrate the effectiveness of this approach.
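
    The role of the overlaps is to permit hysteresis-style switching so the active controller does not chatter at subset boundaries. A schematic sketch of such a switching law (the subset bounds and parameter trajectory are invented, and the paper's LMI-based controller synthesis and interpolation are omitted):

        # Scheduling parameter subsets with overlaps (invented bounds).
        subsets = [(0.0, 0.45), (0.35, 0.75), (0.65, 1.0)]

        def switch(rho, active):
            """Keep the active controller while rho stays inside its subset;
            switch only when rho leaves it, entering via an overlap region."""
            lo, hi = subsets[active]
            if lo <= rho <= hi:
                return active                       # hysteresis: no switch needed
            for idx, (lo, hi) in enumerate(subsets):
                if lo <= rho <= hi:
                    return idx
            raise ValueError("rho outside the scheduling set")

        active = 0
        for rho in (0.1, 0.3, 0.4, 0.5, 0.7, 0.72, 0.6, 0.4):
            active = switch(rho, active)
            print(f"rho={rho:.2f} -> controller {active}")

    Note how, on the way back down, rho = 0.4 keeps controller 1 rather than flipping back to controller 0: the overlap provides the hysteresis that makes the switching smooth.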

  2. Randomised controlled trials of veterinary homeopathy: characterising the peer-reviewed research literature for systematic review.

    Science.gov (United States)

    Mathie, Robert T; Hacke, Daniela; Clausen, Jürgen

    2012-10-01

    Systematic review of the research evidence in veterinary homeopathy has never previously been carried out. This paper presents the search methods, together with categorised lists of retrieved records, that enable us to identify the literature that is acceptable for future systematic review of randomised controlled trials (RCTs) in veterinary homeopathy. All randomised and controlled trials of homeopathic intervention (prophylaxis and/or treatment of disease, in any species except man) were appraised according to pre-specified criteria. The following databases were systematically searched from their inception up to and including March 2011: AMED; Carstens-Stiftung Homeopathic Veterinary Clinical Research (HomVetCR) database; CINAHL; Cochrane Central Register of Controlled Trials; Embase; Hom-Inform; LILACS; PubMed; Science Citation Index; Scopus. One hundred and fifty records were retrieved; 38 satisfied the acceptance criteria (substantive report of a clinical treatment or prophylaxis trial in veterinary homeopathic medicine randomised and controlled and published in a peer-reviewed journal), and were thus eligible for future planned systematic review. Approximately half of the rejected records were theses. Seven species and 27 different species-specific medical conditions were represented in the 38 papers. Similar numbers of papers reported trials of treatment and prophylaxis (n=21 and n=17 respectively) and were controlled against placebo or other than placebo (n=18, n=20 respectively). Most research focused on non-individualised homeopathy (n=35 papers) compared with individualised homeopathy (n=3). The results provide a complete and clarified view of the RCT literature in veterinary homeopathy. We will systematically review the 38 substantive peer-reviewed journal articles under the main headings: treatment trials; prophylaxis trials. Copyright © 2012 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  3. The Impact of Water, Sanitation and Hygiene Interventions to Control Cholera: A Systematic Review.

    OpenAIRE

    Taylor, DL; Kahawita, TM; Cairncross, S; Ensink, JH

    2015-01-01

    Background and Methods Cholera remains a significant threat to global public health with an estimated 100,000 deaths per year. Water, sanitation and hygiene (WASH) interventions are frequently employed to control outbreaks though evidence regarding their effectiveness is often missing. This paper presents a systematic literature review investigating the function, use and impact of WASH interventions implemented to control cholera. Results The review yielded eighteen studies and of the five st...

  4. Effects of physical exercise interventions in frail older adults: a systematic review of randomized controlled trials

    OpenAIRE

    de Labra, Carmen; Guimaraes-Pinheiro, Christyanne; Maseda, Ana; Lorenzo, Trinidad; Millán-Calenti, José C.

    2015-01-01

    Background Low physical activity has been shown to be one of the most common components of frailty, and interventions have been considered to prevent or reverse this syndrome. The purpose of this systematic review of randomized, controlled trials is to examine the exercise interventions to manage frailty in older people. Methods The PubMed, Web of Science, and Cochrane Central Register of Controlled Trials databases were searched using specific keywords and Medical Subject Headings for random...

  5. Science, practice, and human errors in controlling Clostridium botulinum in heat-preserved food in hermetic containers.

    Science.gov (United States)

    Pflug, Irving J

    2010-05-01

    The incidence of botulism in canned food in the last century is reviewed along with the background science; a few conclusions are reached based on analysis of published data. There are two primary aspects to botulism control: the design of an adequate process and the delivery of the adequate process to containers of food. The probability that the designed process will not be adequate to control Clostridium botulinum is very small, probably less than 1.0 × 10⁻⁶, based on containers of food, whereas the failure of the operator of the processing equipment to deliver the specified process to containers of food may be of the order of 1 in 40 to 1 in 100, based on processing units (retort loads). In the commercial food canning industry, failure to deliver the process will probably be of the order of 1.0 × 10⁻⁴ to 1.0 × 10⁻⁶ when U.S. Food and Drug Administration (FDA) regulations are followed. Botulism incidents have occurred in food canning plants that have not followed the FDA regulations. It is possible but very rare to have botulism result from postprocessing contamination. It may thus be concluded that botulism incidents in canned food are primarily the result of human failure in the delivery of the designed or specified process to containers of food that, in turn, results in the survival, outgrowth, and toxin production of C. botulinum spores. Therefore, efforts in C. botulinum control should be concentrated on reducing human errors in the delivery of the specified process to containers of food.

  6. The effect of a clinical pharmacist-led training programme on intravenous medication errors : a controlled before and after study

    NARCIS (Netherlands)

    Nguyen, Huong; Pham, Hong-Tham; Vo, Dang-Khoa; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja

    Background Little is known about interventions to reduce intravenous medication administration errors in hospitals, especially in low-and middle-income countries. Objective To assess the effect of a clinical pharmacist-led training programme on clinically relevant errors during intravenous

  7. Biofunctional Understanding and Conceptual Control: Searching for Systematic Consensus in Systemic Cohesion

    Science.gov (United States)

    Iran-Nejad, Asghar; Bordbar, Fareed

    2017-01-01

    For first generation scientists after the cognitive revolution, knowers were in active control over all (stages of) information processing. Then, following a decade of transition shaped by intense controversy, embodied cognition emerged and suggested sources of control other than those implied by metaphysical information processing. With a thematic focus on embodiment science and an eye toward systematic consensus in systemic cohesion, the present study explores the roles of biofunctional and conceptual control processes in the wholetheme spiral of biofunctional understanding (see Iran-Nejad and Irannejad, 2017b, Figure 1). According to this spiral, each of the two kinds of understanding has its own unique set of knower control processes. For conceptual understanding (CU), knowers have deliberate attention-allocation control over their first-person “knowthat” and “knowhow” content combined as mutually coherent corequisites. For biofunctional understanding (BU), knowers have attention-allocation control only over their knowthat content but knowhow control content is ordinarily conspicuously absent. To test the hypothesis of differences in the manner of control between CU and BU, participants in two experiments read identical-format statements for internal consistency, as response time was recorded. The results of Experiment 1 supported the hypothesis of differences in the manner of control between the two types of control processes; and Experiment 2 confirmed the results of Experiment 1. These findings are discussed in terms of the predicted differences between BU and CU control processes, their roles in regulating the physically unobservable flow of systemic cohesion in the wholetheme spiral, and a proposal for systematic consensus in systemic cohesion to serve as the second guiding principle in biofunctional embodiment science next to physical science’s first guiding principle of systematic observation. PMID:29114235

  8. Biofunctional Understanding and Conceptual Control: Searching for Systematic Consensus in Systemic Cohesion

    Directory of Open Access Journals (Sweden)

    Asghar Iran-Nejad

    2017-10-01

    Full Text Available For first generation scientists after the cognitive revolution, knowers were in active control over all (stages of) information processing. Then, following a decade of transition shaped by intense controversy, embodied cognition emerged and suggested sources of control other than those implied by metaphysical information processing. With a thematic focus on embodiment science and an eye toward systematic consensus in systemic cohesion, the present study explores the roles of biofunctional and conceptual control processes in the wholetheme spiral of biofunctional understanding (see Iran-Nejad and Irannejad, 2017b, Figure 1). According to this spiral, each of the two kinds of understanding has its own unique set of knower control processes. For conceptual understanding (CU), knowers have deliberate attention-allocation control over their first-person “knowthat” and “knowhow” content combined as mutually coherent corequisites. For biofunctional understanding (BU), knowers have attention-allocation control only over their knowthat content but knowhow control content is ordinarily conspicuously absent. To test the hypothesis of differences in the manner of control between CU and BU, participants in two experiments read identical-format statements for internal consistency, as response time was recorded. The results of Experiment 1 supported the hypothesis of differences in the manner of control between the two types of control processes; and Experiment 2 confirmed the results of Experiment 1. These findings are discussed in terms of the predicted differences between BU and CU control processes, their roles in regulating the physically unobservable flow of systemic cohesion in the wholetheme spiral, and a proposal for systematic consensus in systemic cohesion to serve as the second guiding principle in biofunctional embodiment science next to physical science’s first guiding principle of systematic observation.

  9. Biofunctional Understanding and Conceptual Control: Searching for Systematic Consensus in Systemic Cohesion.

    Science.gov (United States)

    Iran-Nejad, Asghar; Bordbar, Fareed

    2017-01-01

    For first generation scientists after the cognitive revolution, knowers were in active control over all (stages of) information processing. Then, following a decade of transition shaped by intense controversy, embodied cognition emerged and suggested sources of control other than those implied by metaphysical information processing. With a thematic focus on embodiment science and an eye toward systematic consensus in systemic cohesion, the present study explores the roles of biofunctional and conceptual control processes in the wholetheme spiral of biofunctional understanding (see Iran-Nejad and Irannejad, 2017b, Figure 1). According to this spiral, each of the two kinds of understanding has its own unique set of knower control processes. For conceptual understanding (CU), knowers have deliberate attention-allocation control over their first-person "knowthat" and "knowhow" content combined as mutually coherent corequisites. For biofunctional understanding (BU), knowers have attention-allocation control only over their knowthat content but knowhow control content is ordinarily conspicuously absent. To test the hypothesis of differences in the manner of control between CU and BU, participants in two experiments read identical-format statements for internal consistency, as response time was recorded. The results of Experiment 1 supported the hypothesis of differences in the manner of control between the two types of control processes; and Experiment 2 confirmed the results of Experiment 1. These findings are discussed in terms of the predicted differences between BU and CU control processes, their roles in regulating the physically unobservable flow of systemic cohesion in the wholetheme spiral, and a proposal for systematic consensus in systemic cohesion to serve as the second guiding principle in biofunctional embodiment science next to physical science's first guiding principle of systematic observation.

  10. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  11. Square-Wave Voltage Injection Algorithm for PMSM Position Sensorless Control With High Robustness to Voltage Errors

    DEFF Research Database (Denmark)

    Ni, Ronggang; Xu, Dianguo; Blaabjerg, Frede

    2017-01-01

    Rotor position estimated with high-frequency (HF) voltage injection methods can be distorted by voltage errors due to inverter nonlinearities, motor resistance, and rotational voltage drops, etc. This paper proposes an improved HF square-wave voltage injection algorithm which is robust to voltage errors without any compensation and has less fluctuation in the position estimation error. The average position estimation error is investigated based on the analysis of phase harmonic inductances, and deduced in the form of the phase shift of the second-order harmonic inductances to derive its relationship with the magnetic field distortion. Position estimation errors caused by higher order harmonic inductances and voltage harmonics generated by the SVPWM are also discussed. Both simulations and experiments are carried out based on a commercial PMSM to verify the superiority of the proposed method.

  12. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  13. Building a World-Class Safety Culture: The National Ignition Facility and the Control of Human and Organizational Error

    International Nuclear Information System (INIS)

    Bennett, C T; Stalnaker, G

    2002-01-01

    Accidents in complex systems send us signals. They may be harbingers of a catastrophe. Some even argue that a “normal” consequence of operations in a complex organization may not only be the goods it produces, but also accidents and, inevitably, catastrophes. We would like to tell you the story of a large, complex organization whose history questions the argument “that accidents just happen.” Starting from a less than enviable safety record, the National Ignition Facility (NIF) has accumulated over 2.5 million safe hours. The story of NIF is still unfolding. The facility is still being constructed and commissioned. But the steps NIF has taken in achieving its safety record provide a principled blueprint that may be of value to others. Describing that principled blueprint is the purpose of this paper. The first part of this paper is a case study of NIF and its effort to achieve a world-class safety record. This case study will include a description of (1) NIF's complex systems, (2) NIF's early safety history, (3) factors that may have initiated its safety culture change, and (4) the evolution of its safety blueprint. In the last part of the paper, we will compare NIF's safety culture to what safety industry experts, psychologists, and sociologists say about how to shape a culture and control organizational error.

  14. Effect of MLC leaf position, collimator rotation angle, and gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Sen; Li, Guangjun; Wang, Maojie; Jiang, Qinfeng; Zhang, Yingjie [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China); Wei, Yuquan, E-mail: yuquawei@vip.sina.com [State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan (China)

    2013-07-01

    The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulating plans and the clinical plans with evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. Dose distributions showed a high sensitivity to systematic MLC leaf position errors, which depended on field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in the parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be regularly conducted for MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, it can be concluded that setting a proper threshold for allowed errors of collimator and gantry rotation angle may increase treatment efficacy and reduce treatment time.

  15. Single Versus Multiple Events Error Potential Detection in a BCI-Controlled Car Game With Continuous and Discrete Feedback.

    Science.gov (United States)

    Kreilinger, Alex; Hiebel, Hannah; Müller-Putz, Gernot R

    2016-03-01

    This work aimed to find and evaluate a new method for detecting errors in continuous brain-computer interface (BCI) applications. Instead of classifying errors on a single-trial basis, the new method was based on multiple events (MEs) analysis to increase the accuracy of error detection. In a BCI-driven car game, based on motor imagery (MI), discrete events were triggered whenever subjects collided with coins and/or barriers. Coins counted as correct events, whereas barriers were errors. This new method, termed ME method, combined and averaged the classification results of single events (SEs) and determined the correctness of MI trials, which consisted of event sequences instead of SEs. The benefit of this method was evaluated in an offline simulation. In an online experiment, the new method was used to detect erroneous MI trials. Such MI trials were discarded and could be repeated by the users. We found that, even with low SE error potential (ErrP) detection rates, feasible accuracies can be achieved when combining MEs to distinguish erroneous from correct MI trials. Online, all subjects reached higher scores with error detection than without, at the cost of longer times needed for completing the game. Findings suggest that ErrP detection may become a reliable tool for monitoring continuous states in BCI applications when combining MEs. This paper demonstrates a novel technique for detecting errors in online continuous BCI applications, which yields promising results even with low single-trial detection rates.
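
    The ME method's combination step, averaging single-event classifier outputs across a trial's events to decide whether the whole MI trial was erroneous, can be sketched as follows (the per-event probabilities and the threshold are invented; the underlying single-event ErrP classifier is not reproduced here):

        import numpy as np

        def trial_is_error(event_probs, threshold=0.5):
            """Classify a motor-imagery trial from the error probabilities that a
            single-event ErrP classifier assigned to each event in the trial."""
            return np.mean(event_probs) > threshold

        # Invented per-event error probabilities for two trials: even with noisy
        # single-event outputs, averaging over multiple events stabilises the call.
        coin_trial = [0.35, 0.55, 0.30, 0.40]      # mostly 'correct' events
        barrier_trial = [0.65, 0.45, 0.80, 0.70]   # mostly 'error' events
        print(trial_is_error(coin_trial), trial_is_error(barrier_trial))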

  16. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we will limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist but in a more systematic fashion. The methods used in these procedures and some of the recent applications will be described in this paper.

  17. A New Control Structure for Grid-Connected LCL PV Inverters with Zero Steady-State Error and Selective Harmonic Compensation

    DEFF Research Database (Denmark)

    Teodorescu, Remus; Blaabjerg, Frede; Borup, Uffe

    2004-01-01

    The PI current control of a single-phase inverter has well-known drawbacks: steady-state magnitude and phase error and limited disturbance rejection capability. When the current-controlled inverter is connected to the grid, the phase error results in a power factor decrement, and the limited disturbance rejection capability leads to the need for grid feed-forward compensation. However, the imperfect compensation action of the feed-forward control results in high harmonic distortion of the current and consequently non-compliance with international standards. In this paper a new control strategy aimed at mitigating these problems is proposed. Stationary-frame generalized integrators are used to control the fundamental current and to compensate the grid harmonics, providing disturbance rejection capability without the need for feed-forward grid compensation. Moreover, the use of a grid LCL......
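
    As an illustration of the stationary-frame generalized integrator idea, the sketch below implements a discrete proportional-resonant controller, Kp + Ki·s/(s² + ω0²), with extra resonant terms tuned at selected harmonics to provide the selective harmonic compensation. The gains, sample time and harmonic orders are hypothetical, and the simple Euler discretization is only one of several possible choices.

        class ResonantTerm:
            # Generalized integrator Ki*s/(s^2 + w0^2), discretized with two
            # cascaded Euler integrators (x1 is the resonant output).
            def __init__(self, ki, w0, ts):
                self.ki, self.w0, self.ts = ki, w0, ts
                self.x1 = 0.0
                self.x2 = 0.0

            def update(self, error):
                self.x1 += (self.ki * error - self.w0 ** 2 * self.x2) * self.ts
                self.x2 += self.x1 * self.ts
                return self.x1

        class PRController:
            # Proportional term plus resonators at the fundamental and at
            # selected odd harmonics.
            def __init__(self, kp, ki, w0, ts, harmonics=(3, 5, 7)):
                self.kp = kp
                self.terms = [ResonantTerm(ki, h * w0, ts) for h in (1, *harmonics)]

            def update(self, error):
                return self.kp * error + sum(t.update(error) for t in self.terms)

    Because each resonator has (ideally) infinite open-loop gain at its tuned frequency, the loop can drive the steady-state error at the fundamental and at the compensated harmonics to zero without a feed-forward path.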

  18. The influence of the analog-to-digital conversion error on the JT-60 plasma position/shape feedback control system

    International Nuclear Information System (INIS)

    Yoshida, Michiharu; Kurihara, Kenichi

    1995-12-01

    In the plasma feedback control system (PFCS) and the direct digital controller (DDC) for the poloidal field coil power supply in the JT-60 tokamak, it is necessary to observe the signals of all the poloidal field coil currents. Each of the signals, originally measured by a single sensor, is distributed to the PFCS and DDC through different cable routes and different analog-to-digital converters. This produces a conversion error of up to several bits between the two systems. Consequently, the proper voltage computed by the feedback calculation cannot be applied to the coil, and the control performance may deteriorate to a certain extent. This paper describes how this error influences the plasma horizontal position control and how to improve the deteriorated control performance. (author)
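
    To see how two independent conversion chains can disagree by several bits on the same coil-current signal, consider an ideal quantizer with small, independent gain errors in each route; the numbers below are illustrative, not JT-60 values.

        def adc_code(v, full_scale, bits):
            # Ideal ADC transfer: one least significant bit (LSB) spans
            # full_scale / 2**bits.
            lsb = full_scale / (1 << bits)
            return int(v / lsb)

        # The same 1000 A signal through two routes with a 0.3% gain mismatch:
        route_a = adc_code(1000.0, 5000.0, 12)          # -> 819
        route_b = adc_code(1000.0 * 1.003, 5000.0, 12)  # -> 821
        print(route_b - route_a)                        # ~2 LSB disagreement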

  19. Hypoxia in the St. Lawrence Estuary: How a Coding Error Led to the Belief that "Physics Controls Spatial Patterns".

    Directory of Open Access Journals (Sweden)

    Daniel Bourgault

    Full Text Available Two fundamental sign errors were found in a computer code used for studying the oxygen minimum zone (OMZ and hypoxia in the Estuary and Gulf of St. Lawrence. These errors invalidate the conclusions drawn from the model, and call into question a proposed mechanism for generating OMZ that challenges classical understanding. The study in question is being cited frequently, leading the discipline in the wrong direction.

  20. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  1. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  2. Schizophrenia and weight management: a systematic review of interventions to control weight.

    Science.gov (United States)

    Faulkner, G; Soundy, A A; Lloyd, K

    2003-11-01

    Weight gain is a frequent side effect of antipsychotic medication, which has serious implications for a patient's health and well-being. This study systematically reviews the literature on the effectiveness of interventions designed to control weight gain in schizophrenia. A systematic search strategy was conducted of major databases in addition to citation searches. Study quality was rated. Sixteen studies met the inclusion criteria. Five of eight pharmacological intervention studies reported small reductions in weight. All behavioural (including diet and/or exercise) interventions reported small reductions in, or maintenance of, weight. Weight loss may be difficult but it is not impossible. Given the inconsistent results, the widespread use of pharmacological interventions cannot be recommended. Both dietary and exercise counselling set within a behavioural modification programme are necessary for sustained weight control.

  3. Systematic review of randomized controlled trials in the treatment of dry eye disease in Sjogren syndrome

    OpenAIRE

    Shih, Kendrick Co; Lun, Christie Nicole; Jhanji, Vishal; Thong, Bernard Yu-Hor; Tong, Louis

    2017-01-01

    Abstract Primary Sjögren's syndrome is an autoimmune disease characterized by dry eye and dry mouth. We systematically reviewed all the randomized controlled clinical trials published in the last 15 years that included ocular outcomes. We found 22 trials involving 9 topical, 10 oral, 2 intravenous and 1 subcutaneous modalities of treatment. Fluorometholone eye drops over 8 weeks were more effective than topical cyclosporine in the treatment of dry eye symptoms and signs; similarly, indomethac...

  4. Components of effective randomized controlled trials of hydrotherapy programs for fibromyalgia syndrome: A systematic review

    Directory of Open Access Journals (Sweden)

    Luke Perraton

    2009-11-01

    Full Text Available Luke Perraton, Zuzana Machotka, Saravana Kumar, International Centre for Allied Health Evidence, University of South Australia, Adelaide, South Australia, Australia. Aim: Previous systematic reviews have found hydrotherapy to be an effective management strategy for fibromyalgia syndrome (FMS). The aim of this systematic review was to summarize the components of hydrotherapy programs used in randomized controlled trials. Method: A systematic review of randomized controlled trials was conducted. Only trials that have reported significant FMS-related outcomes were included. Data relating to the components of hydrotherapy programs (exercise type, duration, frequency and intensity, environmental factors, and service delivery) were analyzed. Results: Eleven randomized controlled trials were included in this review. Overall, the quality of trials was good. Aerobic exercise featured in all 11 trials and the majority of hydrotherapy programs included either a strengthening or flexibility component. Great variability was noted in both the environmental components of hydrotherapy programs and service delivery. Conclusions: Aerobic exercise, warm-up and cool-down periods and relaxation exercises are common features of hydrotherapy programs that report significant FMS-related outcomes. Treatment duration of 60 minutes, frequency of three sessions per week and an intensity equivalent to 60%–80% maximum heart rate were the most commonly reported exercise components. Exercise appears to be the most important component of an effective hydrotherapy program for FMS, particularly when considering mental health-related outcomes. Keywords: hydrotherapy, fibromyalgia syndrome, exercise, effective, components

  5. Components of effective randomized controlled trials of hydrotherapy programs for fibromyalgia syndrome: A systematic review.

    Science.gov (United States)

    Perraton, Luke; Machotka, Zuzana; Kumar, Saravana

    2009-11-30

    Previous systematic reviews have found hydrotherapy to be an effective management strategy for fibromyalgia syndrome (FMS). The aim of this systematic review was to summarize the components of hydrotherapy programs used in randomized controlled trials. A systematic review of randomized controlled trials was conducted. Only trials that have reported significant FMS-related outcomes were included. Data relating to the components of hydrotherapy programs (exercise type, duration, frequency and intensity, environmental factors, and service delivery) were analyzed. Eleven randomized controlled trials were included in this review. Overall, the quality of trials was good. Aerobic exercise featured in all 11 trials and the majority of hydrotherapy programs included either a strengthening or flexibility component. Great variability was noted in both the environmental components of hydrotherapy programs and service delivery. Aerobic exercise, warm-up and cool-down periods and relaxation exercises are common features of hydrotherapy programs that report significant FMS-related outcomes. Treatment duration of 60 minutes, frequency of three sessions per week and an intensity equivalent to 60%-80% maximum heart rate were the most commonly reported exercise components. Exercise appears to be the most important component of an effective hydrotherapy program for FMS, particularly when considering mental health-related outcomes.

  6. Exercise improves glycaemic control in women diagnosed with gestational diabetes mellitus: a systematic review.

    Science.gov (United States)

    Harrison, Anne L; Shields, Nora; Taylor, Nicholas F; Frawley, Helena C

    2016-10-01

    Does exercise improve postprandial glycaemic control in women diagnosed with gestational diabetes mellitus? A systematic review of randomised trials. Pregnant women diagnosed with gestational diabetes mellitus. Exercise, performed more than once a week, sufficient to achieve an aerobic effect or changes in muscle metabolism. Postprandial blood glucose, fasting blood glucose, glycated haemoglobin, requirement for insulin, adverse events and adherence. This systematic review identified eight randomised, controlled trials involving 588 participants; seven trials (544 participants) had data that were suitable for meta-analysis. Five trials scored ≥ 6 on the PEDro scale, indicating a relatively low risk of bias. Meta-analysis showed that exercise, as an adjunct to standard care, significantly improved postprandial glycaemic control (MD -0.33 mmol/L, 95% CI -0.49 to -0.17) and lowered fasting blood glucose (MD -0.31 mmol/L, 95% CI -0.56 to -0.05) when compared with standard care alone, with no increase in adverse events. Effects of similar magnitude were found for aerobic and resistance exercise programs, if performed at a moderate intensity or greater, for 20 to 30 minutes, three to four times per week. Meta-analysis did not show that exercise significantly reduced the requirement for insulin. All studies reported that complications or other adverse events were either similar or reduced with exercise. Aerobic or resistance exercise, performed at a moderate intensity at least three times per week, safely helps to control postprandial blood glucose levels and other measures of glycaemic control in women diagnosed with gestational diabetes mellitus. PROSPERO CRD42015019106. [Harrison AL, Shields N, Taylor NF, Frawley HC (2016) Exercise improves glycaemic control in women diagnosed with gestational diabetes mellitus: a systematic review. Journal of Physiotherapy 62: 188-196]. Copyright © 2016 Australian Physiotherapy Association. Published by Elsevier B.V. All rights reserved.
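
    The pooled mean differences quoted above come from standard inverse-variance meta-analysis; a minimal fixed-effect version, fed with made-up per-trial values, looks like this.

        import math

        def pooled_md(mds, ses):
            # Fixed-effect inverse-variance pooling: weight each trial's mean
            # difference (MD) by 1/SE^2, then form the pooled SE and 95% CI.
            w = [1.0 / se ** 2 for se in ses]
            md = sum(wi * mi for wi, mi in zip(w, mds)) / sum(w)
            se = math.sqrt(1.0 / sum(w))
            return md, (md - 1.96 * se, md + 1.96 * se)

        # Hypothetical per-trial postprandial glucose differences (mmol/L):
        print(pooled_md([-0.40, -0.25, -0.35], [0.15, 0.12, 0.20]))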

  7. Identification and Assessment of Human Error Due to Design in Damaging the Sour Water Equipment and SRP Unit of Control Room in a Refinery Plant Using the SHERPA Technique

    Directory of Open Access Journals (Sweden)

    2013-02-01

    Conclusion: To prevent and control each of the identified errors and to limit their consequences, appropriate countermeasures should be taken, such as control measures in the form of design changes, including installing appropriate colored tags, digital indicators and warning lights, selected according to the kind of system. Overall, this study showed that SHERPA can be an efficient method for studying human error at operational sites.

  8. A systematic review of studies evaluating diffusion and dissemination of selected cancer control interventions.

    Science.gov (United States)

    Ellis, Peter; Robinson, Paula; Ciliska, Donna; Armour, Tanya; Brouwers, Melissa; O'Brien, Mary Ann; Sussman, Jonathan; Raina, Parminder

    2005-09-01

    With this review, the authors sought to determine what strategies have been evaluated (including the outcomes assessed) to disseminate cancer control interventions that promote the uptake of behavior change. Five topic areas along the cancer care continuum (smoking cessation, healthy diet, mammography, cervical cancer screening, and control of cancer pain) were selected to be representative. A systematic review was conducted of primary studies evaluating dissemination of a cancer control intervention. Thirty-one studies were identified that evaluated dissemination strategies in the 5 topic areas. No strong evidence currently exists to recommend any one dissemination strategy as effective in promoting the uptake of cancer control interventions. The authors conclude that there is a strong need for more research into dissemination of cancer control interventions. Future research should consider methodological issues such as the most appropriate study design and outcomes to be evaluated. (c) 2005 APA, all rights reserved

  9. Dental implant survival rate in well-controlled diabetic patients. A systematic review.

    Directory of Open Access Journals (Sweden)

    Heber Arbildo

    2015-12-01

    Full Text Available Background: Dental implants have now become one of the most popular options for replacing a missing tooth. On the other hand, diabetes mellitus is a systemic disease that affects a large part of the population and is generally considered an absolute or relative contraindication to implant therapy. Aim: To determine the survival rate of dental implants in controlled diabetic patients through a systematic review. Material and methods: A systematic search of the PubMed, SciELO and RedALyC databases was performed. The selection criteria were: studies published in the last 10 years, with at least 20 controlled diabetic patients, reporting survival rate and number of implants placed, with follow-up periods equal to or longer than 1 year, and including a control group of healthy patients. Methodological quality was analyzed with the following scales: Jadad and Downs & Black's CMQ. Results: Three articles with a follow-up period between 1 and 12 years were analyzed. The overall survival rate of dental implants in controlled diabetic patients was 97.43%. Conclusion: The reviewed literature suggests that the survival rate of dental implants in well-controlled diabetic patients is similar to that in non-diabetic patients.

  10. The potentiometric and laser RAMAN study of the hydrolysis of uranyl chloride under physiological conditions and the effect of systematic and random errors on the hydrolysis constants

    International Nuclear Information System (INIS)

    Deschenes, L.L.; Kramer, G.H.; Monserrat, K.J.; Robinson, P.A.

    1986-12-01

    The hydrolysis of uranyl ions in 0.15 mol/L (Na)Cl solution at 37°C has been studied by potentiometric titration. The results were consistent with the formation of (UO2)2(OH)2, (UO2)3(OH)4, (UO2)3(OH)5 and (UO2)4(OH)7. The stability constants, which were evaluated using a version of MINIQUAD, were found to be: log β22 = -5.693 ± 0.007, log β34 = -11.499 ± 0.024, log β35 = -16.001 ± 0.050, log β47 = -21.027 ± 0.051. Laser Raman spectroscopy has been used to identify the products, including the (UO2)4(OH)7 species. The difficulties in identifying the chemical species in solution and the effect of small errors on this selection have also been investigated by computer simulation. The results clearly indicate that small errors can lead to the selection of species that may not exist.

  11. Veterinary homeopathy: systematic review of medical conditions studied by randomised trials controlled by other than placebo.

    Science.gov (United States)

    Mathie, Robert T; Clausen, Jürgen

    2015-09-15

    No systematic review has previously been carried out on randomised controlled trials (RCTs) of veterinary homeopathy in which the control group was an intervention other than placebo (OTP). For eligible peer-reviewed RCTs, the objectives of this study were to assess the risk of bias (RoB) and to quantify the effect size of homeopathic intervention compared with an active comparator or with no treatment. Our systematic review approach complied fully with the PRISMA 2009 Checklist. Cochrane methods were applied to assess RoB and to derive effect size using standard meta-analysis methods. Based on a thorough and systematic literature search, the following key attributes of the published research were distinguished: individualised homeopathy (n = 1 RCT)/non-individualised homeopathy (n = 19); treatment (n = 14)/prophylaxis (n = 6); active controls (n = 18)/untreated controls (n = 2). The trials were highly diverse, representing 12 different medical conditions in 6 different species. No trial had sufficiently low RoB to be judged as reliable evidence: 16 of the 20 RCTs had high RoB; the remaining four had uncertain RoB in several domains of assessment. For three trials with uncertain RoB and without overt vested interest, it was inconclusive whether homeopathy combined with conventional intervention was more or was less effective than conventional intervention alone for modulation of immune response in calves, or in the prophylaxis of cattle tick or of diarrhoea in piglets. Due to the poor reliability of their data, OTP-controlled trials do not currently provide useful insight into the effectiveness of homeopathy in animals.

  12. Can the error detection mechanism benefit from training the working memory? A comparison between dyslexics and controls--an ERP study.

    Directory of Open Access Journals (Sweden)

    Tzipi Horowitz-Kraus

    Full Text Available BACKGROUND: Based on the relationship between working memory and error detection, we investigated the capacity of adult dyslexic readers' working memory to change as a result of training, and the impact of training on the error detection mechanism. METHODOLOGY: 27 dyslexics and 34 controls, all university students, participated in the study. ERP methodology and behavioral measures were employed prior to, immediately after, and 6 months after training. The CogniFit Personal Coach Program, which consists of 24 sessions of direct training of working memory skills, was used. FINDINGS: Both groups of readers gained from the training program but the dyslexic readers gained significantly more. In the dyslexic group, digit span increased from 9.84 ± 3.15 to 10.79 ± 3.03. Working memory training significantly increased the number of words per minute read correctly by 14.73%. Adult brain activity changed as a result of training, evidenced by an increase in both working memory capacity and the amplitude of the Error-related Negativity (ERN) component (24.71%). When ERN amplitudes increased, the percentage of errors on the Sternberg tests decreased. CONCLUSIONS: We suggest that by expanding the working memory capacity, larger units of information are retained in the system, enabling more effective error detection. The crucial functioning of the central executive as a sub-component of the working memory is also discussed.

  13. The Motion Path Study of Measuring Robot Based on Variable Universe Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Ma Guoqing

    2017-01-01

    Full Text Available Because a measuring robot requires high positioning accuracy, we first give an overview of the system's errors and analyse the influence of attitude, speed and other factors on the systematic errors. We then collect and analyse the systematic error curve along the track to complete the planning process. Finally, fuzzy control is added in both cases; comparison with the original system shows that the method based on variable universe fuzzy control can significantly reduce the error during motion.
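
    The "variable universe" idea is that the fuzzy controller's input and output universes contract as the error shrinks, so a fixed rule base gains resolution near zero error. A toy sketch follows, with a saturated linear map standing in for the defuzzified rule base and all parameters hypothetical.

        def rule_base(e_norm):
            # Stand-in for a defuzzified fuzzy rule base on the universe [-1, 1].
            return max(-1.0, min(1.0, 1.5 * e_norm))

        def variable_universe_control(e, E=1.0, tau=0.5, eps=1e-3):
            # Contraction factor alpha(e) = (|e|/E)^tau + eps rescales both the
            # input and output universes, sharpening control as |e| -> 0.
            alpha = (abs(e) / E) ** tau + eps
            return alpha * rule_base(e / alpha)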

  14. The Association between Hearing Loss, Postural Control, and Mobility in Older Adults: A Systematic Review.

    Science.gov (United States)

    Agmon, Maayan; Lavie, Limor; Doumas, Michail

    2017-06-01

    Degraded hearing in older adults has been associated with reduced postural control and higher risk of falls. Both hearing loss (HL) and falls have dramatic effects on older persons' quality of life (QoL). A large body of research explored the comorbidity between the two domains. The aim of the current review is to describe the comorbidity between HL and objective measures of postural control, to offer potential mechanisms underlying this relationship, and to discuss the clinical implications of this comorbidity. PubMed and Google Scholar were systematically searched for articles published in English up until October 15, 2015, using combinations of the following strings and search words: for hearing: Hearing loss, "Hearing loss," hearing, presbycusis; for postural control: postural control, gait, postural balance, fall, walking; and for age: elderly, older adults. Of 211 screened articles, 7 were included in the systematic review. A significant, positive association between HL and several objective measures of postural control was found in all seven studies, even after controlling for major covariates. Severity of hearing impairment was connected to higher prevalence of difficulties in walking and falls. Physiological, cognitive, and behavioral processes that may influence auditory system and postural control were suggested as potential explanations for the association between HL and postural control. There is evidence for the independent relationship between HL and objective measures of postural control in the elderly. However, a more comprehensive understanding of the mechanisms underlying this relationship is yet to be elucidated. Concurrent diagnosis, treatment, and rehabilitation of these two modalities may reduce falls and increase QoL in older adults. American Academy of Audiology

  15. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    Science.gov (United States)

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the
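
    One standard construction of such a summary-statistic region test: under the null hypothesis the vector of marker Z-statistics is approximately N(0, R), with R the LD correlation matrix, so the quadratic form z'R⁻¹z follows a chi-square distribution. The sketch below is not the authors' exact implementation; the ridge term is one simple guard against a misspecified or ill-conditioned correlation estimate.

        import numpy as np
        from scipy import stats

        def summary_stat_test(z, R, ridge=0.0):
            # z: per-marker Z-scores from univariate regressions.
            # R: estimated marker correlation (LD) matrix.
            R = np.asarray(R, dtype=float) + ridge * np.eye(len(z))
            Rinv = np.linalg.pinv(R)         # pseudo-inverse tolerates perfect LD
            T = float(np.asarray(z) @ Rinv @ z)
            dof = np.linalg.matrix_rank(R)
            return T, stats.chi2.sf(T, dof)  # region-level p-value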

  16. Herbal medicine for idiopathic central precocious puberty: A protocol for a systematic review of controlled trials.

    Science.gov (United States)

    Lee, Hye Lim; Lee, Yoo Been; Choi, Jun-Yong; Lee, Ju Ah

    2018-03-01

    Herbal medicine is widely used in East Asia to treat idiopathic central precocious puberty (ICPP). Most of the available clinical trials that investigated herbal medicine for ICPP have been included in this review. This systematic review will assess the efficacy and safety of herbal medicine for ICPP. Eleven databases, including Asian databases, will be searched for studies conducted through 2018. We will include randomized controlled trials assessing herbal medicine for ICPP. The risk of bias will be evaluated using the Cochrane risk of bias assessment tool, and confidence in the cumulative evidence will be evaluated using the Grading of Recommendations Assessment, Development, and Evaluation instrument. This systematic review will be published in a peer-reviewed journal and disseminated both electronically and in print. The review will be updated to inform and guide health care practices. PROSPERO 2018 CRD42018087988.

  17. Explicit control of image noise and error properties in cone-beam microtomography using dual concentric circular source loci

    International Nuclear Information System (INIS)

    Davis, Graham

    2005-01-01

    Cone-beam reconstruction from projections with a circular source locus (relative to the specimen) is commonly used in X-ray microtomography systems. Although this method does not provide an 'exact' reconstruction, since there is insufficient data in the projections, the approximation is considered adequate for many purposes. However, some specimens, with sharp changes in X-ray attenuation in the direction of the rotation axis, are particularly prone to cone-beam-related errors. These errors can be reduced by increasing the source-to-specimen distance, but at the expense of reduced signal-to-noise ratio or increased scanning time. An alternative method, based on heuristic arguments, is to scan the specimen with both short and long source-to-specimen distances and combine high frequency components from the former reconstruction with low frequency ones from the latter. This composite reconstruction has the low noise characteristics of the short source-to-specimen reconstruction and the low cone-beam errors of the long one. This has been tested with simulated data representing a particularly error prone specimen
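
    The heuristic combination step can be read as a frequency-domain blend of the two reconstructed volumes: low spatial frequencies from the long source-to-specimen reconstruction (small cone-beam error), high frequencies from the short one (better signal-to-noise ratio). A sketch with a hard radial cutoff, where the filter shape and cutoff value are free design choices:

        import numpy as np

        def blend_volumes(short_vol, long_vol, cutoff=0.1):
            # Hard radial split in 3D Fourier space (cutoff in cycles/voxel).
            freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in short_vol.shape],
                                indexing="ij")
            radius = np.sqrt(sum(f ** 2 for f in freqs))
            lowpass = radius <= cutoff
            combined = np.where(lowpass, np.fft.fftn(long_vol),
                                np.fft.fftn(short_vol))
            return np.fft.ifftn(combined).real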

  18. Overview of systematic reviews on the health-related effects of government tobacco control policies.

    Science.gov (United States)

    Hoffman, Steven J; Tan, Charlie

    2015-08-05

    Government interventions are critical to addressing the global tobacco epidemic, a major public health problem that continues to deepen. We systematically synthesize research evidence on the effectiveness of government tobacco control policies promoted by the Framework Convention on Tobacco Control (FCTC), supporting the implementation of this international treaty on the tenth anniversary of it entering into force. An overview of systematic reviews was prepared through systematic searches of five electronic databases, published up to March 2014. Additional reviews were retrieved from monthly updates until August 2014, consultations with tobacco control experts and a targeted search for reviews on mass media interventions. Reviews were assessed according to predefined inclusion criteria, and ratings of methodological quality were either extracted from source databases or independently scored. Of 612 reviews retrieved, 45 reviews met the inclusion criteria and 14 more were identified from monthly updates, expert consultations and a targeted search, resulting in 59 included reviews summarizing over 1150 primary studies. The 38 strong and moderate quality reviews published since 2000 were prioritized in the qualitative synthesis. Protecting people from tobacco smoke was the most strongly supported government intervention, with smoke-free policies associated with decreased smoking behaviour, secondhand smoke exposure and adverse health outcomes. Raising taxes on tobacco products also consistently demonstrated reductions in smoking behaviour. Tobacco product packaging interventions and anti-tobacco mass media campaigns may decrease smoking behaviour, with the latter likely an important part of larger multicomponent programs. Financial interventions for smoking cessation are most effective when targeted at smokers to reduce the cost of cessation products, but incentivizing quitting may be effective as well. Although the findings for bans on tobacco advertising were

  19. The analysis and compensation of errors of precise simple harmonic motion control under high speed and large load conditions based on servo electric cylinder

    Science.gov (United States)

    Ma, Chen-xi; Ding, Guo-qing

    2017-10-01

    Simple harmonic waves and synthesized simple harmonic waves are widely used in the testing of instruments. However, because of the errors caused by gear clearance and the time-delay error of the FPGA, it is difficult to control a servo electric cylinder in precise simple harmonic motion under high-speed, high-frequency and large-load conditions. To solve the problem, a method of error compensation is proposed in this paper. In the method, a displacement sensor is fitted on the piston rod of the electric cylinder. By using the displacement sensor, the real-time displacement of the piston rod is obtained and fed back to the input of the servo motor, realizing a closed-loop control. Compensation pulses are then issued in the next period of the synthesized waves. This paper uses an FPGA as the processing core. The software mainly comprises a waveform generator, an Ethernet module, a memory module, a pulse generator, a pulse selector, a protection module and an error compensation module. A shock absorber durability test rig is used as the testing platform. The rig mainly comprises a single electric cylinder, a servo motor for driving the electric cylinder, and the servo motor driver.
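
    In outline, the scheme samples the piston rod's actual displacement over one period of the simple harmonic reference and converts the pointwise error into extra pulses for the following period. A simplified sketch; the sensor scaling, sample layout and correction rule are illustrative assumptions.

        import math

        def next_period_pulses(amp_mm, freq_hz, measured_mm, pulses_per_mm, ts):
            # Compare measured displacement samples with the harmonic reference
            # and emit per-sample pulse corrections for the next period.
            pulses = []
            for k, y in enumerate(measured_mm):
                ref = amp_mm * math.sin(2 * math.pi * freq_hz * k * ts)
                pulses.append(round((ref - y) * pulses_per_mm))
            return pulses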

  20. Glycemic control strategies and the occurrence of surgical site infection: a systematic review.

    Science.gov (United States)

    Domingos, Caroline Maria Herrero; Iida, Luciana Inaba Senyer; Poveda, Vanessa de Brito

    2016-01-01

    To analyze the evidence available in the scientific literature regarding the relationship between the glycemic control strategies used and the occurrence of surgical site infection in adult patients undergoing surgery. This is a systematic review performed through search on the databases of CINAHL, MEDLINE, LILACS, Cochrane Database of Systematic Reviews and EMBASE. Eight randomized controlled trials were selected. Despite the diversity of tested interventions, studies agree that glycemic control is essential to reduce rates of surgical site infection, and should be maintained between 80 and 120 mg/dL during the perioperative period. Compared to other strategies, insulin continuous infusion during surgery was the most tested and seems to get better results in reducing rates of surgical site infection and achieving success in glycemic control. Tight glycemic control during the perioperative period benefits the recovery of surgical patients, and the role of the nursing team is key for the successful implementation of the measure.

  1. Neural correlates of cognitive control in gambling disorder: a systematic review of fMRI studies.

    Science.gov (United States)

    Moccia, Lorenzo; Pettorruso, Mauro; De Crescenzo, Franco; De Risio, Luisa; di Nuzzo, Luigi; Martinotti, Giovanni; Bifone, Angelo; Janiri, Luigi; Di Nicola, Marco

    2017-07-01

    Decreased cognitive control over the urge to be involved in gambling activities is a core feature of Gambling Disorder (GD). Cognitive control can be differentiated into several cognitive sub-processes pivotal in GD clinical phenomenology, such as response inhibition, conflict monitoring, decision-making, and cognitive flexibility. This article aims to systematically review fMRI studies, which investigated the neural mechanisms underlying diminished cognitive control in GD. We conducted a comprehensive literature search and collected neuropsychological and neuroimaging data investigating cognitive control in GD. We included a total of 14 studies comprising 499 individuals. Our results indicate that impaired activity in prefrontal cortex may account for decreased cognitive control in GD, contributing to the progressive loss of control over gambling urges. Among prefrontal regions, orbital and ventromedial areas seem to be a possible nexus for sensory integration, value-based decision-making and emotional processing, thus contributing to both motivational and affective aspects of cognitive control. Finally, we discussed possible therapeutic approaches aimed at the restoration of cognitive control in GD, including pharmacological and brain stimulation treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Can Team-Based Care Improve Patient Satisfaction? A Systematic Review of Randomized Controlled Trials

    Science.gov (United States)

    Wen, Jin; Schulman, Kevin A.

    2014-01-01

    Background Team-based approaches to patient care are a relatively recent innovation in health care delivery. The effectiveness of these approaches on patient outcomes has not been well documented. This paper reports a systematic review of the relationship between team-based care and patient satisfaction. Methods We searched MEDLINE, EMBASE, Cochrane Library, CINAHL, and PSYCHOINFO for eligible studies dating from inception to October 8, 2012. Eligible studies reported (1) a randomized controlled trial, (2) interventions including both team-based care and non-team-based care (or usual care), and (3) outcomes including an assessment of patient satisfaction. Articles with different settings between intervention and control were excluded, as were trial protocols. The reference lists of retrieved papers were also evaluated for inclusion. Results The literature search yielded 319 citations, of which 77 were screened for further full-text evaluation. Of these, 27 articles were included in the systematic review. The 26 trials with a total of 15,526 participants were included in this systematic review. The pooling result of dichotomous data (number of studies: 10) showed that team-based care had a positive effect on patient satisfaction compared with usual care (odds ratio, 2.09; 95% confidence interval, 1.54 to 2.84); however, combined continuous data (number of studies: 7) demonstrated that there was no significant difference in patient satisfaction between team-based care and usual care (standardized mean difference, −0.02; 95% confidence interval, −0.40 to 0.36). Conclusions Some evidence showed that team-based care is better than usual care in improving patient satisfaction. However, considering the pooling result of continuous data, along with the suboptimal quality of included trials, further large-scale and high-quality randomized controlled trials comparing team-based care and usual care are needed. PMID:25014674

  3. Strategies for Control of Moniliophthora roreri and Moniliophthora perniciosa in Theobroma cacao L.: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Paola Andrea Tirado-Gallego

    2016-09-01

    Full Text Available One of the most important limitations of cocoa production worldwide is the diseases caused by pathogenic fungi of the genus Moniliophthora sp., especially Moniliophthora roreri and Moniliophthora perniciosa, which cause moniliasis and witches' broom disease, respectively; both diseases are highly invasive and endemic in cocoa. The objective of this study was to describe the control strategies that can be used to manage moniliasis and witches' broom disease. This study was conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement, for which a systematic literature search in the ScienceDirect, Springer Link and Scopus databases was used. Original research articles from the last 12 years were included, and inclusion and exclusion criteria were applied. In countries like Brazil and Costa Rica, the year with the most related articles reported was 2008. The most used strategies for disease control are phytosanitation, copper-based fungicides, and biological control agents such as fungi and bacteria, especially Trichoderma sp. and Bacillus sp. One of the methodologies most recommended in the studied articles was the optimization of treatments employing a combination of physical, biological and chemical agents.

  4. Interventions to improve hemodialysis adherence: a systematic review of randomized-controlled trials.

    Science.gov (United States)

    Matteson, Michelle L; Russell, Cynthia

    2010-10-01

    Over 485,000 people in the United States have chronic kidney disease, a progressive kidney disease that may lead to hemodialysis. Hemodialysis involves a complex regimen of treatment, medication, fluid, and diet management. In 2005, over 312,000 patients were undergoing hemodialysis in the United States. Dialysis nonadherence rates range from 8.5% to 86%. Dialysis therapy treatment nonadherence, including treatment, medication, fluid, and diet nonadherence, significantly increases the risk of morbidity and mortality. The purpose of this paper is to systematically review randomized-controlled trial intervention studies designed to increase treatment, medication, fluid, and diet adherence in adult hemodialysis patients. A search of the Cumulative Index to Nursing and Allied Health Literature (CINAHL) (1982 to May 2008), MEDLINE (1950 to May 2008), PsycINFO (1806 to May 2008), and all Evidence-Based Medicine (EBM) Reviews (Cochrane DSR, ACP Journal Club, DARE, and CCTR) was conducted to identify randomized-controlled studies that tested the efficacy of interventions to improve adherence in adult hemodialysis patients. Eight randomized-controlled trials met criteria for inclusion. Six of the 8 studies found statistically significant improvement in adherence with the intervention. Of these 6 intervention studies, all studies had a cognitive component, with 3 studies utilizing cognitive/behavioral intervention strategies. Based on this systematic review, interventions utilizing a cognitive or cognitive/behavioral component appear to show the most promise for future study. © 2010 The Authors. Hemodialysis International © 2010 International Society for Hemodialysis.

  5. Assessment on tracking error performance of Cascade P/PI, NPID and N-Cascade controller for precise positioning of xy table ballscrew drive system

    International Nuclear Information System (INIS)

    Abdullah, L; Jamaludin, Z; Rafan, N A; Jamaludin, J; Chiew, T H

    2013-01-01

    At present, positioning plants in machine tools demand a high degree of accuracy and robustness in order to compensate various disturbance forces. The objective of this paper is to assess the tracking performance of Cascade P/PI, nonlinear PID (NPID) and nonlinear cascade (N-Cascade) controllers in the presence of disturbance forces in the form of cutting forces. Cutting force characteristics at different cutting parameters, such as spindle speed, are analysed using the Fast Fourier Transform. The tracking performance of the N-Cascade controller in the presence of these cutting forces is compared with the NPID controller and the Cascade P/PI controller. The robustness of these controllers in compensating different cutting characteristics is compared based on the reduction in the amplitudes of cutting force harmonics using the Fast Fourier Transform. It is found that the N-Cascade controller performs better than both the NPID controller and the Cascade P/PI controller. The average percentage error reduction between the N-Cascade controller and the Cascade P/PI controller is about 65%, whereas the average percentage error reduction between the cascade controller and the NPID controller is about 82%, at a spindle speed of 3000 rpm. The finalized design of the N-Cascade controller could be utilized further for machining applications such as milling. The implementation of the N-Cascade controller in machine tool applications will increase the quality of the end product and productivity in industry by saving machining time. It is suggested that the range of spindle speeds be made wider to accommodate the needs of high-speed machining.
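
    For reference, the Cascade P/PI structure used as the baseline here is a proportional position loop wrapped around a proportional-integral velocity loop. A discrete-time sketch with hypothetical gains and sample time:

        class CascadePPI:
            def __init__(self, kp_pos, kp_vel, ki_vel, ts):
                # Outer loop: position P. Inner loop: velocity PI.
                self.kp_pos, self.kp_vel, self.ki_vel, self.ts = kp_pos, kp_vel, ki_vel, ts
                self.integ = 0.0

            def update(self, pos_ref, pos_meas, vel_meas):
                vel_ref = self.kp_pos * (pos_ref - pos_meas)  # position loop
                vel_err = vel_ref - vel_meas
                self.integ += vel_err * self.ts               # PI integrator
                return self.kp_vel * vel_err + self.ki_vel * self.integ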

  6. The effect of speaking rate on serial-order sound-level errors in normal healthy controls and persons with aphasia.

    Science.gov (United States)

    Fossett, Tepanta R D; McNeil, Malcolm R; Pratt, Sheila R; Tompkins, Connie A; Shuster, Linda I

    Although many speech errors can be generated at either a linguistic or motoric level of production, phonetically well-formed sound-level serial-order errors are generally assumed to result from disruption of phonologic encoding (PE) processes. An influential model of PE (Dell, 1986; Dell, Burger & Svec, 1997) predicts that speaking rate should affect the relative proportion of these serial-order sound errors (anticipations, perseverations, exchanges). These predictions have been extended to, and have special relevance for, persons with aphasia (PWA) because of the increased frequency with which speech errors occur and because their localization within the functional linguistic architecture may help in diagnosis and treatment. Supporting evidence regarding the effect of speaking rate on phonological encoding has been provided by studies using young normal language (NL) speakers and computer simulations. Limited data exist for older NL users and no group data exist for PWA. This study tested the phonologic encoding properties of Dell's model of speech production (Dell, 1986; Dell et al., 1997), which predicts that increasing speaking rate affects the relative proportion of serial-order sound errors (i.e., anticipations, perseverations, and exchanges). The effects of speech rate on the error ratios of anticipation/exchange (AE), anticipation/perseveration (AP) and vocal reaction time (VRT) were examined in 16 normal healthy controls (NHC) and 16 PWA without concomitant motor speech disorders. The participants were recorded performing a phonologically challenging (tongue twister) speech production task at their typical rate and at two faster speaking rates. A significant effect of increased rate was obtained for the AP but not the AE ratio. Significant effects of group and rate were obtained for VRT. Although the significant effect of rate for the AP ratio provided evidence that changes in speaking rate did affect PE, the results failed to support the model-derived predictions.

  7. Study of a New Method of Tracking Control with Zero Steady-State Error on Very-Low Frequency

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A servo control system is prone to low speed and unsteadiness during very-low-frequency follow-up. A design method of feedforward control based on an intelligent controller is put forward. Simulation and test results show that the method has excellent control characteristics and strong robustness, which meets servo control needs at very low frequency.

  8. An Adaptive Traffic Signal Control in a Connected Vehicle Environment: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Peng Jing

    2017-08-01

    Full Text Available In the last few years, traffic congestion has become a growing concern due to increasing vehicle ownership in urban areas. Intersections are one of the major bottlenecks that contribute to urban traffic congestion. Traditional traffic signal control systems cannot adjust the timing pattern depending on road traffic demand. This results in excessive delays for road users. Adaptive traffic signal control in a connected vehicle environment has shown a powerful ability to effectively alleviate urban traffic congestion and achieve desirable objectives (e.g., delay minimization). Connected vehicle technology, as an emerging technology, is a mobile data platform that enables real-time data exchange among vehicles and between vehicles and infrastructure. Although several reviews about traffic signal control or connected vehicles have been written, a systematic review of adaptive traffic signal control in a connected vehicle environment has not been made. Twenty-six eligible studies searched from six databases constitute the review. A quality evaluation was established based on previous research instruments and applied to the current review. The purpose of this paper is to critically review the existing methods of adaptive traffic signal control in a connected vehicle environment and to compare the advantages and disadvantages of those methods. Further, a systematic framework for connected vehicle based adaptive traffic signal control is summarized to support future research. Future research is needed to develop more efficient and generic adaptive traffic signal control methods in a connected vehicle environment.

  9. Effectiveness of Azadirachta indica (neem) mouthrinse in plaque and gingivitis control: a systematic review.

    Science.gov (United States)

    Dhingra, K; Vandana, K L

    2017-02-01

    The aim of this systematic review was to evaluate the effectiveness of Azadirachta indica (neem)-based herbal mouthrinse in improving plaque control and gingival health. Literature search was accomplished using electronic databases (PubMed, Cochrane Central Register of Controlled Trials and EMBASE) and manual searching, up to February 2015, for randomized controlled trials (RCTs) presenting clinical data for efficacy of neem mouthrinses when used alone or as an adjunct to mechanical oral hygiene as compared to chlorhexidine mouthrinses for controlling plaque and gingival inflammation in patients with gingivitis. Of the total 206 articles searched, three randomized controlled trials evaluating neem-based herbal mouthrinses were included. Due to marked heterogeneity observed in study characteristics, meta-analysis was not performed. These studies reported that neem mouthrinse was as effective as chlorhexidine mouthrinse when used as an adjunct to toothbrushing in reducing plaque and gingival inflammation in gingivitis patients. However, the quality of reporting and evidence along with methods of studies was generally flawed with unclear risk of bias. Despite the promising results shown in ex