WorldWideScience

Sample records for relevant error sources

  1. Water displacement leg volumetry in clinical studies - A discussion of error sources

    Science.gov (United States)

    2010-01-01

    Background Water displacement leg volumetry is a highly reproducible method, allowing the confirmation of efficacy of vasoactive substances. Nevertheless, errors in its execution and the selection of unsuitable patients are likely to negatively affect the outcome of clinical studies in chronic venous insufficiency (CVI). Discussion Placebo-controlled double-blind drug studies in CVI were searched (Cochrane Review 2005, MedLine search until December 2007) and assessed with regard to efficacy (volume reduction of the leg), patient characteristics, and potential methodological error sources. Almost every second study reported only small drug effects (≤ 30 mL volume reduction). The conduct of volumetry was identified as the most relevant error source. Because the practical use of available equipment varies, volume differences of more than 300 mL - many times the size of a potential treatment effect - have been reported between consecutive measurements. Other potential error sources were insufficient patient guidance or difficulties with the transition from the Widmer CVI classification to the CEAP (Clinical Etiological Anatomical Pathophysiological) grading. Summary Patients should be properly diagnosed with CVI and selected for stable oedema and further clinical symptoms relevant to the specific study. Centres require thorough training in the use of the volumeter and in patient guidance. Volumetry should be performed under constant conditions. The reproducibility of short-term repeat measurements has to be ensured. PMID:20070899

  2. Human error theory: relevance to nurse management.

    Science.gov (United States)

    Armitage, Gerry

    2009-03-01

    This paper describes, discusses and critically appraises human error theory and considers its relevance for nurse managers. Healthcare errors are a persistent threat to patient safety. Effective risk management and clinical governance depend on understanding the nature of error. This paper draws upon a wide literature from published works, largely from the field of cognitive psychology and human factors. Although the content of this paper is pertinent to any healthcare professional, it is written primarily for nurse managers. Error is inevitable. Causation is often attributed to individuals, yet causation in complex environments such as healthcare is predominantly multi-factorial. Individual performance is affected by the tendency to develop prepacked solutions and attention deficits, which can in turn be related to local conditions and systems or latent failures. Blame is often inappropriate. Defences should be constructed in the light of these considerations and to promote error wisdom and organizational resilience. Managing and learning from error is seen as a priority in the British National Health Service (NHS); this can be better achieved with an understanding of the roots, nature and consequences of error. Such an understanding can provide a helpful framework for a range of risk management activities.

  3. A posteriori error estimates in voice source recovery

    Science.gov (United States)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model that relates these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  4. Probable sources of errors in radiation therapy (abstract)

    International Nuclear Information System (INIS)

    Khan, U.H.

    1998-01-01

    It is a fact that some errors always occur in dose-volume prescription, management of the radiation beam, derivation of exposure, planning of the treatment and, finally, the treatment of the patient (a three-dimensional subject). This paper highlights all the sources of error and the relevant methods to decrease or eliminate them, thus improving the overall therapeutic efficiency and accuracy. Radiotherapy is a comprehensive teamwork of the radiotherapist, medical radiation physicist, medical technologist and the patient. All the links in the whole chain of radiotherapy are equally important and are duly considered in the paper. The decision for palliative or radical treatment is based on the nature and extent of the disease, site, stage, grade, length of the history of the condition, biopsy reports, etc. This may entail certain uncertainties in the volume of the tumor, the quality and quantity of radiation, dose fractionation, etc., which may be under- or over-estimated. An effort has been made to guide the radiotherapist in avoiding the pitfalls in the arena of radiotherapy. (author)

  5. Clinical relevance of and risk factors associated with medication administration time errors

    NARCIS (Netherlands)

    Teunissen, R.; Bos, J.; Pot, H.; Pluim, M.; Kramers, C.

    2013-01-01

    PURPOSE: The clinical relevance of and risk factors associated with errors related to medication administration time were studied. METHODS: In this explorative study, 66 medication administration rounds were studied on two wards (surgery and neurology) of a hospital. Data on medication errors were

  6. Spelling Errors of Iranian School-Level EFL Learners: Potential Sources

    Directory of Open Access Journals (Sweden)

    Mahnaz Saeidi

    2010-05-01

    With the purpose of examining the sources of spelling errors of Iranian school-level EFL learners, the present researchers analyzed the dictation samples of 51 Iranian senior and junior high school male and female students studying at an Iranian school in Baku, Azerbaijan. The content analysis of the data revealed three main sources (intralingual, interlingual, and unique) with seven patterns of errors. The frequency of intralingual errors far outnumbered that of interlingual errors, and unique errors were even less frequent. Therefore, in-service training programs may include some instruction on raising the teachers’ awareness of the different sources of errors to focus on during the teaching program.

  7. The error sources appearing for the gamma radioactive source measurement in dynamic condition

    International Nuclear Information System (INIS)

    Sirbu, M.

    1977-01-01

    The error analysis for the measurement of gamma radioactive sources placed on the soil, carried out with the help of a helicopter, is presented. The analysis is based on a new formula that takes into account the gamma-ray attenuation factor of the helicopter walls. A complete error formula and an error diagram are given. (author)
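
    The abstract does not reproduce the new formula itself; purely as an illustration of the kind of wall-attenuation correction involved (our assumption, not the authors' exact expression), a narrow-beam exponential attenuation term would look like

    $$\dot N_{\mathrm{corr}} = \dot N_{\mathrm{meas}}\, e^{\mu_w t_w},$$

    where $\mu_w$ is the linear attenuation coefficient of the helicopter wall material at the relevant gamma energy and $t_w$ is the wall thickness traversed by the photons.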

  8. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    Science.gov (United States)

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated the error variance. SNB showed the lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes, in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
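
    In the shorthand commonly used for scalar timing (our notation, not taken from the paper), the scalar properties described above can be written as

    $$\sigma(T) = \gamma\, T, \qquad \bar t(T) - T = \beta\, T,$$

    where $T$ is the target time, $\bar t(T)$ and $\sigma(T)$ are the mean and standard deviation of the start/stop times, and $\gamma$ and $\beta$ are constants (the Weber fraction and the relative systematic error, respectively).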

  10. The VTTVIS line imaging spectrometer - principles, error sources, and calibration

    DEFF Research Database (Denmark)

    Jørgensen, R.N.

    2002-01-01

    Hyperspectral imaging with a spatial resolution of a few mm2 has proved to have a great potential within crop and weed classification and also within nutrient diagnostics. A commonly used hyperspectral imaging system is based on the Prism-Grating-Prism (PGP) principles produced by Specim Ltd., yet there is little work describing the basic principles, potential error sources, and/or adjustment and calibration procedures. This report fulfils the need for such documentation, with special focus on the system at KVL. The PGP based system has several severe error sources, which should be removed prior to any analysis: variations in off-axis transmission efficiencies, diffraction efficiencies, and image distortion have a significant impact on the instrument performance. Procedures removing or minimising these systematic error sources are developed and described for the system built at KVL but can be generalised to other PGP based systems.

  11. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  12. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  13. The use of source memory to identify one's own episodic confusion errors.

    Science.gov (United States)

    Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R

    2001-03-01

    In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.

  14. Sources of medical error in refractive surgery.

    Science.gov (United States)

    Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B

    2013-05-01

    To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.

  15. Detection of anomalies in radio tomography of asteroids: Source count and forward errors

    Science.gov (United States)

    Pursiainen, S.; Kaasalainen, M.

    2014-09-01

    The purpose of this study was to advance numerical methods for radio tomography, in which an asteroid's internal electric permittivity distribution is to be recovered from radio frequency data gathered by an orbiter. The focus was on signal generation via multiple sources (transponders), providing one potential, or even essential, scenario to be implemented in a challenging in situ measurement environment and within tight payload limits. As a novel feature, the effects of forward errors, including noise and a priori uncertainty of the forward (data) simulation, were examined through a combination of the iterative alternating sequential (IAS) inverse algorithm and finite-difference time-domain (FDTD) simulation of time evolution data. Single and multiple source scenarios were compared in two-dimensional localization of permittivity anomalies. Three different anomaly strengths and four levels of total noise were tested. Results suggest, among other things, that multiple sources can be necessary to obtain appropriate results, for example, to distinguish three separate anomalies with permittivity less than or equal to half of the background value, which is relevant in the recovery of internal cavities.

  16. Reduction of sources of error and simplification of the Carbon-14 urea breath test

    International Nuclear Information System (INIS)

    Bellon, M.S.

    1997-01-01

    Full text: Carbon-14 urea breath testing is established in the diagnosis of H. pylori infection. The aim of this study was to investigate possible further simplification and identification of error sources in the 14C urea kit extensively used at the Royal Adelaide Hospital. Thirty-six patients with validated H. pylori status were tested with breath samples taken at 10, 15, and 20 min. Using the single sample value at 15 min, there was no change in the diagnostic category. Reduction of errors in analysis depends on attention to the following details: stability of the absorption solution (now > 2 months), compatibility of the scintillation cocktail/absorption solution (with particular regard to photoluminescence and chemiluminescence), reduction in chemical quenching (moisture reduction), understanding of the counting hardware and its relevance, and appropriate response to deviations in quality assurance. With this experience, we are confident of the performance and reliability of the RAPID-14 urea breath test kit now available commercially.

  17. 50 CFR 424.13 - Sources of information and relevant data.

    Science.gov (United States)

    2010-10-01

    50 Wildlife and Fisheries, 7, 2010-10-01. § 424.13 Sources of information and relevant data. When considering any revision of the lists, the Secretary shall ... administrative reports, maps or other graphic materials, information received from experts on the subject, and ...

  18. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
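
    As a concrete illustration of the basic idea (the source block is treated as an error pattern and its syndrome is the compressed data), the following minimal Python sketch uses a (7,4) Hamming parity-check matrix; it is our toy example of plain syndrome-source-coding, not the universal scheme proposed in the paper, and it reproduces the source block exactly only when the block is very sparse (at most one 1).

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; column j is the binary
    # representation of j, so the syndrome of a single 1 at position j encodes j.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def compress(x):
        # 7 source bits -> 3 compressed bits (the syndrome of x)
        return H @ x % 2

    def decompress(s):
        # Coset-leader decoding: return the minimum-weight pattern with syndrome s.
        x_hat = np.zeros(7, dtype=int)
        j = int(s[0] + 2 * s[1] + 4 * s[2])
        if j > 0:
            x_hat[j - 1] = 1
        return x_hat

    x = np.array([0, 0, 0, 0, 1, 0, 0])   # sparse source block
    s = compress(x)                        # -> array([1, 0, 1])
    assert np.array_equal(decompress(s), x)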

  19. Correlation of errors in the Monte Carlo fission source and the fission matrix fundamental-mode eigenvector

    International Nuclear Information System (INIS)

    Dufek, Jan; Holst, Gustaf

    2016-01-01

    Highlights: • Errors in the fission matrix eigenvector and fission source are correlated. • The error correlations depend on coarseness of the spatial mesh. • The error correlations are negligible when the mesh is very fine. - Abstract: Previous studies raised a question about the level of a possible correlation of errors in the cumulative Monte Carlo fission source and the fundamental-mode eigenvector of the fission matrix. A number of new methods tally the fission matrix during the actual Monte Carlo criticality calculation, and use its fundamental-mode eigenvector for various tasks. The methods assume the fission matrix eigenvector is a better representation of the fission source distribution than the actual Monte Carlo fission source, although the fission matrix and its eigenvectors do contain statistical and other errors. A recent study showed that the eigenvector could be used for an unbiased estimation of errors in the cumulative fission source if the errors in the eigenvector and the cumulative fission source were not correlated. Here we present new numerical study results that answer the question about the level of the possible error correlation. The results may be of importance to all methods that use the fission matrix. New numerical tests show that the error correlation is present at a level which strongly depends on properties of the spatial mesh used for tallying the fission matrix. The error correlation is relatively strong when the mesh is coarse, while the correlation weakens as the mesh gets finer. We suggest that the coarseness of the mesh is measured in terms of the value of the largest element in the tallied fission matrix as that way accounts for the mesh as well as system properties. In our test simulations, we observe only negligible error correlations when the value of the largest element in the fission matrix is about 0.1. Relatively strong error correlations appear when the value of the largest element in the fission matrix raises
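
    For context, the fundamental-mode eigenvector discussed above is simply the dominant eigenvector of the tallied fission matrix; a minimal power-iteration sketch (illustrative only, using a toy matrix of our own rather than a tallied one) is:

    import numpy as np

    def fundamental_mode(F, iters=1000, tol=1e-12):
        # F[i, j]: expected fission neutrons born in mesh cell i per source neutron
        # in cell j. The dominant eigenvector approximates the fission source shape.
        s = np.full(F.shape[0], 1.0 / F.shape[0])   # flat initial source guess
        for _ in range(iters):
            s_new = F @ s
            s_new /= s_new.sum()                     # keep it normalised like a source
            if np.abs(s_new - s).max() < tol:
                break
            s = s_new
        return s_new

    # toy 3-cell fission matrix (not from the study)
    F = np.array([[1.0, 0.3, 0.0],
                  [0.3, 0.9, 0.3],
                  [0.0, 0.3, 1.0]])
    print(fundamental_mode(F))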

  20. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo

    2016-01-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo ... radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse ...

  1. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    International Nuclear Information System (INIS)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-01-01

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between

  2. Adaptation to sensory-motor reflex perturbations is blind to the source of errors.

    Science.gov (United States)

    Hudson, Todd E; Landy, Michael S

    2012-01-06

    In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.

  3. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  4. Using Generalizability Theory to Disattenuate Correlation Coefficients for Multiple Sources of Measurement Error.

    Science.gov (United States)

    Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat

    2018-05-02

    Over the years, research in the social sciences has been dominated by reporting of reliability coefficients that fail to account for key sources of measurement error. Use of these coefficients, in turn, to correct for measurement error can hinder scientific progress by misrepresenting true relationships among the underlying constructs being investigated. In the research reported here, we addressed these issues using generalizability theory (G-theory) in both traditional and new ways to account for the three key sources of measurement error (random-response, specific-factor, and transient) that affect scores from objectively scored measures. Results from 20 widely used measures of personality, self-concept, and socially desirable responding showed that conventional indices consistently misrepresented reliability and relationships among psychological constructs by failing to account for key sources of measurement error and correlated transient errors within occasions. The results further revealed that G-theory served as an effective framework for remedying these problems. We discuss possible extensions in future research and provide code from the computer package R in an online supplement to enable readers to apply the procedures we demonstrate to their own research.
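
    For background, the classical disattenuation formula that such corrections build on (standard psychometrics, not a result specific to this article) is

    $$\hat\rho_{T_X T_Y} = \frac{r_{XY}}{\sqrt{r_{XX}\, r_{YY}}},$$

    where $r_{XY}$ is the observed correlation and $r_{XX}$, $r_{YY}$ are the reliability coefficients of the two measures; if those reliabilities are inflated because specific-factor or transient error is ignored, the correction is too weak and the relationship between the underlying constructs is misrepresented, which is the problem the G-theory coefficients are meant to remedy.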

  5. Source position error influence on industry CT image quality

    International Nuclear Information System (INIS)

    Cong Peng; Li Zhipeng; Wu Haifeng

    2004-01-01

    Based on a simulation exercise, the influence of source position error on industrial CT (ICT) image quality was studied, and valuable parameters were obtained for the design of ICT systems. A clear CT image of a container was also acquired from the CT testing system. (authors)

  6. Utilising identifier error variation in linkage of large administrative data sources

    Directory of Open Access Journals (Sweden)

    Katie Harron

    2017-02-01

    Background Linkage of administrative data sources often relies on probabilistic methods using a set of common identifiers (e.g. sex, date of birth, postcode). Variation in data quality on an individual or organisational level (e.g. by hospital) can result in clustering of identifier errors, violating the assumption of independence between identifiers required for traditional probabilistic match weight estimation. This potentially introduces selection bias to the resulting linked dataset. We aimed to measure variation in identifier error rates in a large English administrative data source (Hospital Episode Statistics; HES) and to incorporate this information into match weight calculation. Methods We used 30,000 randomly selected HES hospital admissions records of patients aged 0–1, 5–6 and 18–19 years, for 2011/2012, linked via NHS number with data from the Personal Demographic Service (PDS; our gold standard). We calculated identifier error rates for sex, date of birth and postcode and used multi-level logistic regression to investigate associations with individual-level attributes (age, ethnicity, and gender) and organisational variation. We then derived: (i) weights incorporating dependence between identifiers; (ii) attribute-specific weights (varying by age, ethnicity and gender); and (iii) organisation-specific weights (by hospital). Results were compared with traditional match weights using a simulation study. Results Identifier errors (where values disagreed in linked HES-PDS records) or missing values were found in 0.11% of records for sex and date of birth and in 53% of records for postcode. Identifier error rates differed significantly by age, ethnicity and sex (p < 0.0005). Errors were less frequent in males, in 5–6 year olds and 18–19 year olds compared with infants, and were lowest for the Asian ethnic group. A simulation study demonstrated that substantial bias was introduced into estimated readmission rates in the presence
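
    For background, the traditional match weights referred to above follow the Fellegi-Sunter formulation (a standard result, not a formula quoted from the abstract): identifier $i$ contributes

    $$w_i = \log_2\!\frac{m_i}{u_i} \ \text{ on agreement}, \qquad w_i = \log_2\!\frac{1-m_i}{1-u_i} \ \text{ on disagreement},$$

    where $m_i$ and $u_i$ are the probabilities of agreement among true matches and among non-matches, and the weights are summed over identifiers under the independence assumption. Clustered identifier errors change $m_i$ across patient groups and hospitals, which is exactly what the attribute- and organisation-specific weights described above are intended to capture.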

  7. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    International Nuclear Information System (INIS)

    Ma, T; Kumaraswamy, L

    2015-01-01

    Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software’s ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors of up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (a uniform 3-degree shift). 2D and 3D gamma evaluation were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between the original plan and the error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, where a conventional gamma-based pre-treatment QA might not necessarily detect it

  8. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa
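
    For reference, the Gamma passing rate used as the QA metric in this study is based on the gamma index of Low et al.; the following minimal brute-force sketch for two dose planes on a common grid (global normalisation, no interpolation, so coarser than clinical implementations and purely illustrative) shows how such a passing rate is obtained.

    import numpy as np

    def gamma_passing_rate(d_ref, d_eval, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0,
                           low_dose_cut=0.10):
        # Fraction of reference points (above a low-dose threshold) with gamma <= 1,
        # using a global dose criterion of dose_tol * max(d_ref).
        dd = dose_tol * d_ref.max()
        ny, nx = d_ref.shape
        ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
        passed = evaluated = 0
        for iy in range(ny):
            for ix in range(nx):
                if d_ref[iy, ix] < low_dose_cut * d_ref.max():
                    continue                          # skip very low-dose points
                dist2 = ((ys - iy) ** 2 + (xs - ix) ** 2) * spacing_mm ** 2
                gamma2 = dist2 / dist_tol_mm ** 2 + (d_eval - d_ref[iy, ix]) ** 2 / dd ** 2
                evaluated += 1
                passed += gamma2.min() <= 1.0
        return passed / evaluated

    # toy example: a slightly shifted Gaussian "beam"
    y, x = np.mgrid[0:40, 0:40]
    ref = np.exp(-((x - 20.0) ** 2 + (y - 20.0) ** 2) / 60.0)
    ev = np.exp(-((x - 21.0) ** 2 + (y - 20.0) ** 2) / 60.0)
    print(gamma_passing_rate(ref, ev, spacing_mm=1.0))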

  9. Safety analysis methodology with assessment of the impact of the prediction errors of relevant parameters

    International Nuclear Information System (INIS)

    Galia, A.V.

    2011-01-01

    The best estimate plus uncertainty approach (BEAU) requires the use of extensive resources and is therefore usually applied to cases in which the available safety margin obtained with a conservative methodology can be questioned. Outside the BEAU methodology, there is no clear approach on how to deal with the issue of considering the uncertainties resulting from prediction errors in the safety analyses performed for licensing submissions. However, the regulatory document RD-310 mentions that the analysis method shall account for uncertainties in the analysis data and models. A possible approach, which is simple and reasonable and represents just the author's views, is presented to take into account the impact of prediction errors and other uncertainties when performing safety analysis in line with regulatory requirements. The approach proposes taking into account the prediction error of relevant parameters. Relevant parameters would be those plant parameters that are surveyed and are used to initiate the action of a mitigating system, or those that are representative of the most challenging phenomena for the integrity of a fission barrier. Examples of the application of the methodology are presented, involving a comparison between the results with the new approach and a best estimate calculation during the blowdown phase for two small breaks in a generic CANDU 6 station. The calculations are performed with the CATHENA computer code. (author)

  10. Error Analysis of CM Data Products Sources of Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, Brian D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eckert-Gallup, Aubrey Celia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cochran, Lainy Dromgoole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kraus, Terrence D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Allen, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beal, Bill [National Security Technologies, Joint Base Andrews, MD (United States); Okada, Colin [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Simpson, Mathew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-01

    The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE’s Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision-making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.

  11. International Test Comparisons: Reviewing Translation Error in Different Source Language-Target Language Combinations

    Science.gov (United States)

    Zhao, Xueyu; Solano-Flores, Guillermo; Qian, Ming

    2018-01-01

    This article addresses test translation review in international test comparisons. We investigated the applicability of the theory of test translation error--a theory of the multidimensionality and inevitability of test translation error--across source language-target language combinations in the translation of PISA (Programme for International Student Assessment)…

  12. The accuracy of webcams in 2D motion analysis: sources of error and their control

    International Nuclear Information System (INIS)

    Page, A; Candelas, P; Belmar, F; Moreno, R

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost

  13. The accuracy of webcams in 2D motion analysis: sources of error and their control

    Energy Technology Data Exchange (ETDEWEB)

    Page, A; Candelas, P; Belmar, F [Departamento de Fisica Aplicada, Universidad Politecnica de Valencia, Valencia (Spain); Moreno, R [Instituto de Biomecanica de Valencia, Valencia (Spain)], E-mail: alvaro.page@ibv.upv.es

    2008-07-15

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost.

  14. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.

  15. Structural Model Error and Decision Relevancy

    Science.gov (United States)

    Goldsby, M.; Lusk, G.

    2017-12-01

    The extent to which climate models can underwrite specific climate policies has long been a contentious issue. Skeptics frequently deny that climate models are trustworthy in an attempt to undermine climate action, whereas policy makers often desire information that exceeds the capabilities of extant models. While not skeptics, a group of mathematicians and philosophers [Frigg et al. (2014)] recently argued that even tiny differences between the structure of a complex dynamical model and its target system can lead to dramatic predictive errors, possibly resulting in disastrous consequences when policy decisions are based upon those predictions. They call this result the Hawkmoth effect (HME), and seemingly use it to rebuke rightwing proposals to forgo mitigation in favor of adaptation. However, a vigorous debate has emerged between Frigg et al. on one side and another philosopher-mathematician pair [Winsberg and Goodwin (2016)] on the other. On one hand, Frigg et al. argue that their result shifts the burden to climate scientists to demonstrate that their models do not fall prey to the HME. On the other hand, Winsberg and Goodwin suggest that arguments like those asserted by Frigg et al. can be, if taken seriously, "dangerous": they fail to consider the variety of purposes for which models can be used, and thus too hastily undermine large swaths of climate science. They put the burden back on Frigg et al. to show their result has any effect on climate science. This paper seeks to attenuate this debate by establishing an irenic middle position; we find that there is more agreement between sides than it first seems. We distinguish a `decision standard' from a `burden of proof', which helps clarify the contributions to the debate from both sides. In making this distinction, we argue that scientists bear the burden of assessing the consequences of HME, but that the standard Frigg et al. adopt for decision relevancy is too strict.

  16. Partial and specific source memory for faces associated to other- and self-relevant negative contexts.

    Science.gov (United States)

    Bell, Raoul; Giang, Trang; Buchner, Axel

    2012-01-01

    Previous research has shown a source memory advantage for faces presented in negative contexts. As yet it remains unclear whether participants remember the specific type of context in which the faces were presented or whether they can only remember that the face was associated with negative valence. In the present study, participants saw faces together with descriptions of two different types of negative behaviour and neutral behaviour. In Experiment 1, we examined whether the participants were able to discriminate between two types of other-relevant negative context information (cheating and disgusting behaviour) in a source memory test. In Experiment 2, we assessed source memory for other-relevant negative (threatening) context information (other-aggressive behaviour) and self-relevant negative context information (self-aggressive behaviour). A multinomial source memory model was used to separately assess partial source memory for the negative valence of the behaviour and specific source memory for the particular type of negative context the face was associated with. In Experiment 1, source memory was specific for the particular type of negative context presented (i.e., cheating or disgusting behaviour). Experiment 2 showed that source memory for other-relevant negative information was more specific than source memory for self-relevant information. Thus, emotional source memory may vary in specificity depending on the degree to which the negative emotional context is perceived as threatening.

  17. Identifying afterloading PDR and HDR brachytherapy errors using real-time fiber-coupled Al2O3:C dosimetry and a novel statistical error decision criterion

    International Nuclear Information System (INIS)

    Kertzscher, Gustavo; Andersen, Claus E.; Siebert, Frank-Andre; Nielsen, Soren Kynde; Lindegaard, Jacob C.; Tanderup, Kari

    2011-01-01

    Background and purpose: The feasibility of a real-time in vivo dosimeter to detect errors has previously been demonstrated. The purpose of this study was to: (1) quantify the sensitivity of the dosimeter to detect imposed treatment errors under well controlled and clinically relevant experimental conditions, and (2) test a new statistical error decision concept based on full uncertainty analysis. Materials and methods: Phantom studies of two gynecological cancer PDR and one prostate cancer HDR patient treatment plans were performed using tandem ring applicators or interstitial needles. Imposed treatment errors, including interchanged pairs of afterloader guide tubes and 2-20 mm source displacements, were monitored using a real-time fiber-coupled carbon doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated at three dose levels: dwell position, source channel, and fraction. The error criterion incorporated the correlated source position uncertainties and other sources of uncertainty, and it was applied both for the specific phantom patient plans and for a general case (source-detector distance 5-90 mm and position uncertainty 1-4 mm). Results: Out of 20 interchanged guide tube errors, time-resolved analysis identified 17 while fraction level analysis identified two. Channel and fraction level comparisons could leave 10 mm dosimeter displacement errors unidentified. Dwell position dose rate comparisons correctly identified displacements ≥5 mm. Conclusion: This phantom study demonstrates that Al2O3:C real-time dosimetry can identify applicator displacements ≥5 mm and interchanged guide tube errors during PDR and HDR brachytherapy. The study demonstrates the shortcoming of a constant error criterion and the advantage of a statistical error criterion.

  18. Impact and quantification of the sources of error in DNA pooling designs.

    Science.gov (United States)

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
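
    As an illustration of the correction for differential allelic amplification mentioned above (a commonly used form, assumed here rather than quoted from the paper), the pooled frequency of allele A can be estimated as

    $$\hat p_A = \frac{h_A}{h_A + k\, h_B}, \qquad k = \frac{\bar h_A^{\,\mathrm{het}}}{\bar h_B^{\,\mathrm{het}}},$$

    where $h_A$ and $h_B$ are the two allele signal intensities measured in the pool and $k$ is the amplification ratio estimated from individually genotyped heterozygotes; without the factor $k$, the estimate is systematically biased towards the preferentially amplified allele.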

  19. Effects of errors on the dynamic aperture of the Advanced Photon Source storage ring

    International Nuclear Information System (INIS)

    Bizek, H.; Crosbie, E.; Lessner, E.; Teng, L.; Wirsbinski, J.

    1991-01-01

    The individual tolerance limits for alignment errors and magnet fabrication errors in the 7-GeV Advanced Photon Source storage ring are determined by computer-simulated tracking. Limits are established for dipole strength and roll errors, quadrupole strength and alignment errors, sextupole strength and alignment errors, as well as higher order multipole strengths in dipole and quadrupole magnets. The effects of girder misalignments on the dynamic aperture are also studied. Computer simulations are obtained with the tracking program RACETRACK, with errors introduced from a user-defined Gaussian distribution, truncated at ±5 standard deviation units. For each error, the average and rms spread of the stable amplitudes are determined for ten distinct machines, defined as ten different seeds to the random distribution, and for five distinct initial directions of the tracking particle. 4 refs., 4 figs., 1 tab
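
    A minimal sketch of the error-assignment step described above, drawing each error from a user-defined Gaussian truncated at ±5 standard deviations, is given below; parameter names and values are illustrative and are not taken from the RACETRACK input.

    import numpy as np

    def truncated_gaussian(rms, n, cutoff=5.0, seed=0):
        # Draw n errors from N(0, rms^2), redrawing any value beyond cutoff*rms.
        rng = np.random.default_rng(seed)
        errors = rng.normal(0.0, rms, n)
        while True:
            out_of_range = np.abs(errors) > cutoff * rms
            if not out_of_range.any():
                return errors
            errors[out_of_range] = rng.normal(0.0, rms, out_of_range.sum())

    # e.g. quadrupole misalignments with an assumed 0.1 mm rms for 400 magnets
    dx = truncated_gaussian(rms=0.1e-3, n=400)
    print(dx.min(), dx.max())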

  20. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    Science.gov (United States)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of the practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. Besides, we adopt this model to carry out simulations of two widely used sources: weak coherent source (WCS) and heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. And when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using HSPS suffered less than that of decoy-state QKD using WCS.

  1. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    Science.gov (United States)

    Olson, Corwin; Long, Anne; Carpenter, Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  2. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    Science.gov (United States)

    Olson, Corwin; Long, Anne; Carpenter, J. Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  3. Study of principle error sources in gamma spectrometry. Application to cross sections measurement

    International Nuclear Information System (INIS)

    Majah, M. Ibn.

    1985-01-01

    The principal error sources in gamma spectrometry have been studied with the purpose of measuring cross sections with great precision. Three error sources have been studied: dead time and pile-up, which depend on the counting rate, and the coincidence effect, which depends on the disintegration scheme of the radionuclide in question. A constant-frequency pulse generator has been used to correct the counting losses due to dead time and pile-up in cases of long and short disintegration periods. The loss due to the coincidence effect can reach 25% or more, depending on the disintegration scheme and on the source-detector distance. After establishing the correction formula and verifying its validity for four examples (iron-56, scandium-48, antimony-120 and gold-196m), an application was made by measuring cross sections of nuclear reactions that lead to long disintegration periods, which require counting at short source-detector distances and thus correction of the losses due to dead time, pile-up and the coincidence effect. 16 refs., 45 figs., 25 tabs. (author)
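
    The pulser correction referred to above works, in its standard form (stated here as general background, not as the authors' exact formula), by scaling each measured peak area by the fraction of injected pulser pulses that survive dead time and pile-up:

    $$N_{\mathrm{corr}} = N_{\mathrm{meas}} \cdot \frac{f_p\, t}{N_p},$$

    where $f_p$ is the pulser frequency, $t$ the real counting time (so $f_p t$ pulses were injected), and $N_p$ the number of pulser counts actually recorded in the pulser peak; the same loss factor applies to the gamma peaks because pulser and gamma pulses experience the same dead-time and pile-up losses.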

  4. Longitudinal Cut Method Revisited: A Survey on the Main Error Sources

    OpenAIRE

    Moriconi, Alessandro; Lalli, Francesco; Di Felice, Fabio; Esposito, Pier Giorgio; Piscopia, Rodolfo

    2000-01-01

    Some of the main error sources in wave pattern resistance determination were investigated. The experimental data obtained at the Italian Ship Model Basin (longitudinal wave cuts concerned with the steady motion of the Series 60 model and a hard-chine catamaran) were analyzed. It was found that, within the range of Froude numbers tested (0.225 ≤ Fr ≤ 0.345 for the Series 60 and 0.5 ≤ Fr ≤ 1 for the catamaran), two sources of uncertainty play a significant role: (i) the p...

  5. the effect of current and relevant information sources on the use

    African Journals Online (AJOL)

    Admin

    reported similar findings at Yaba College of Technology, Lagos. However, in a ... values. In other words, current information sources resulted in the use of the library. Jam (1992) identified lack of relevant information sources to be one of the problems facing library users and has ... Bachelor's degree holders. That those with.

  6. Rotational patient setup errors in IGRT with XVI system in Elekta Synergy and their clinical relevance

    International Nuclear Information System (INIS)

    Madhusudhana Sresty, N.V.N.; Muralidhar, K.R.; Raju, A.K.; Sha, R.L.; Ramanjappa

    2008-01-01

    The goal of Image Guided Radiotherapy (IGRT) is to improve the accuracy of treatment delivery. In this technique, it is possible to acquire volumetric images of the patient anatomy before delivery of treatment. The XVI (release 3.5) system in the Elekta Synergy linear accelerator (Elekta, Crawley, UK) has the potential to ensure that the relative positions of the target volume are the same as in the treatment plan. It involves acquiring planar images produced by a kilovoltage cone beam rotating about the patient in the treatment position. After a 3-dimensional match between reference and localization images, the system reports rotational errors in addition to translational shifts. Translational shifts can easily be corrected with the treatment couch, but rotational shifts cannot. Most studies have dealt with translational shifts only; few have reported on rotational errors. It has been found that, in the treatment of elongated targets, even small rotational errors can make a difference in the results. The main objectives of this study are: (1) to verify the magnitude of rotational errors observed in different clinical sites and to compare them with other reports; (2) to find their clinical relevance; and (3) to find the difference in rotational shift results with improper selection of the kV collimator

  7. Sources of Error in Satellite Navigation Positioning

    Directory of Open Access Journals (Sweden)

    Jacek Januszewski

    2017-09-01

    Full Text Available Uninterrupted information about the user's position can generally be obtained from a satellite navigation system (SNS). At the time of this writing (January 2017), two global SNSs, GPS and GLONASS, are fully operational; two further global systems, Galileo and BeiDou, are under construction. In each SNS the accuracy of the user's position is affected by three main factors: the accuracy of each satellite position, the accuracy of the pseudorange measurement and the satellite geometry. The user's position error is a function of both the pseudorange error, called UERE (User Equivalent Range Error), and the user/satellite geometry, expressed by the appropriate Dilution Of Precision (DOP) coefficient. This error is decomposed into two types of errors: the signal-in-space ranging error, called URE (User Range Error), and the user equipment error (UEE). Detailed analyses of URE, UEE, UERE and the DOP coefficients, and of the changes of the DOP coefficients on different days, are presented in this paper.
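
    The error budget sketched in this abstract combines a ranging error with a geometry factor. A minimal sketch of that textbook relationship (with illustrative URE, UEE and DOP values that are not taken from the paper):

```python
import math

def uere(ure: float, uee: float) -> float:
    """User Equivalent Range Error as the root-sum-square of the
    signal-in-space ranging error (URE) and the user equipment error (UEE)."""
    return math.hypot(ure, uee)

def position_error(dop: float, uere_m: float) -> float:
    """Approximate 1-sigma position error: DOP coefficient times UERE."""
    return dop * uere_m

# Illustrative (assumed) values in metres: URE ~ 0.8 m, UEE ~ 1.5 m, HDOP ~ 1.3
range_error = uere(0.8, 1.5)
print(f"UERE ~ {range_error:.2f} m")
print(f"Horizontal position error ~ {position_error(1.3, range_error):.2f} m")
```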

  8. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  9. Temporal dynamics of conflict monitoring and the effects of one or two conflict sources on error-(related) negativity.

    Science.gov (United States)

    Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta

    2010-09-01

    The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which comprised two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude was found after double errors (comprising two conflict sources) as compared to hand and stop errors (comprising one conflict source), whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  10. Decoy-state quantum key distribution with both source errors and statistical fluctuations

    International Nuclear Information System (INIS)

    Wang Xiangbin; Yang Lin; Peng Chengzhi; Pan Jianwei

    2009-01-01

    We show how to faithfully calculate the fraction of single-photon counts in 3-intensity decoy-state quantum cryptography with both statistical fluctuations and source errors taken into account. Our results rely only on bounds on a few parameters of the states of the pulses.

  11. Accuracy and Sources of Error for an Angle Independent Volume Flow Estimator

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Hansen, Peter Møller

    2014-01-01

    This paper investigates sources of error for a vector velocity volume flow estimator. Quantification of the estimator's accuracy is performed theoretically and investigated in vivo. Womersley's model for pulsatile flow is used to simulate velocity profiles and calculate volume flow errors. ... A BK Medical UltraView 800 ultrasound scanner with a 9 MHz linear array transducer is used to obtain Vector Flow Imaging sequences of a superficial part of the fistulas. Cross-sectional diameters of each fistula are measured on B-mode images by rotating the scan plane 90 degrees. The major axis...

  12. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation so that the spatial variation of the source term becomes small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used to evaluate the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method does not have any negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)
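
    One plausible reading of the idea, written as a generic within-group diffusion balance (the notation and this particular split are assumptions chosen for illustration, not taken from the paper):

```latex
\[
  -\nabla\!\cdot\! D_g \nabla\phi_g + \Sigma_{t,g}\,\phi_g
      = \Sigma_{s,g\to g}\,\phi_g + Q_g
  \;\;\Longrightarrow\;\;
  -\nabla\!\cdot\! D_g \nabla\phi_g
      + \bigl(\Sigma_{t,g}-\Sigma_{s,g\to g}\bigr)\phi_g = Q_g
\]
```

    In this reading, the remaining source Q_g (fission plus in-scatter from other groups) typically varies more smoothly within a node than the full right-hand side, so a flat-source treatment of Q_g introduces a smaller spatial discretization error.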

  13. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Science.gov (United States)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  14. Explicit control of image noise and error properties in cone-beam microtomography using dual concentric circular source loci

    International Nuclear Information System (INIS)

    Davis, Graham

    2005-01-01

    Cone-beam reconstruction from projections with a circular source locus (relative to the specimen) is commonly used in X-ray microtomography systems. Although this method does not provide an 'exact' reconstruction, since the projections do not contain sufficient data, the approximation is considered adequate for many purposes. However, some specimens, with sharp changes in X-ray attenuation in the direction of the rotation axis, are particularly prone to cone-beam-related errors. These errors can be reduced by increasing the source-to-specimen distance, but at the expense of a reduced signal-to-noise ratio or an increased scanning time. An alternative method, based on heuristic arguments, is to scan the specimen at both short and long source-to-specimen distances and combine high-frequency components from the former reconstruction with low-frequency ones from the latter. This composite reconstruction has the low-noise characteristics of the short source-to-specimen reconstruction and the low cone-beam errors of the long one. This has been tested with simulated data representing a particularly error-prone specimen.
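
    The composite reconstruction described above amounts to a frequency-band blend of two volumes. A minimal sketch of such a blend (using a Gaussian split; the filter choice and the sigma value are assumptions, not details from the paper) could be:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blend_reconstructions(recon_short, recon_long, sigma=4.0):
    """Combine two cone-beam reconstructions of the same specimen:
    low spatial frequencies from the long source-to-specimen scan (small
    cone-beam error) and high spatial frequencies from the short scan
    (good signal-to-noise ratio). `sigma` (in voxels) sets the crossover
    between the two bands and is an assumed parameter."""
    low_pass = gaussian_filter(recon_long, sigma)                    # keep coarse structure
    high_pass = recon_short - gaussian_filter(recon_short, sigma)    # keep fine detail
    return low_pass + high_pass

# Illustrative use with random volumes standing in for real reconstructions.
rng = np.random.default_rng(0)
short_scan = rng.normal(size=(64, 64, 64))
long_scan = rng.normal(size=(64, 64, 64))
composite = blend_reconstructions(short_scan, long_scan)
```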

  15. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

    Science.gov (United States)

    Perlee, Caroline J.; Casasent, David P.

    1990-09-01

    Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.

  16. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    International Nuclear Information System (INIS)

    DeSalvo, Riccardo

    2015-01-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism different from loss angle and viscous models is necessary. • Mitigation measures proposed may bring coherence to the measurements of G

  17. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  18. Identification errors in the blood transfusion laboratory: a still relevant issue for patient safety.

    Science.gov (United States)

    Lippi, Giuseppe; Plebani, Mario

    2011-04-01

    Remarkable technological advances and increased awareness have both contributed to a substantial decrease in the uncertainty of the analytical phase, so that the manually intensive preanalytical activities currently represent the leading sources of errors in laboratory and transfusion medicine. Among preanalytical errors, misidentification and mistransfusion are still regarded as a considerable problem, posing serious risks to patient health and carrying huge expenses for the healthcare system. As such, a reliable policy of risk management should be readily implemented, developed through a multifaceted approach, to prevent or limit the adverse outcomes related to transfusion reactions from blood incompatibility. This strategy encompasses root cause analysis, compliance with accreditation requirements, strict adherence to standard operating procedures, guidelines and recommendations for specimen collection, use of positive identification devices, rejection of potentially misidentified specimens, informatics data entry, query host communication, automated systems for patient identification and sample labeling, and an adequate and safe environment. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Source memory errors in schizophrenia, hallucinations and negative symptoms: a synthesis of research findings.

    Science.gov (United States)

    Brébion, G; Ohlsen, R I; Bressan, R A; David, A S

    2012-12-01

    Previous research has shown associations between source memory errors and hallucinations in patients with schizophrenia. We bring together here findings from a broad memory investigation to specify better the type of source memory failure that is associated with auditory and visual hallucinations. Forty-one patients with schizophrenia and 43 healthy participants underwent a memory task involving recall and recognition of lists of words, recognition of pictures, memory for temporal and spatial context of presentation of the stimuli, and remembering whether target items were presented as words or pictures. False recognition of words and pictures was associated with hallucination scores. The extra-list intrusions in free recall were associated with verbal hallucinations whereas the intra-list intrusions were associated with a global hallucination score. Errors in discriminating the temporal context of word presentation and the spatial context of picture presentation were associated with auditory hallucinations. The tendency to remember verbal labels of items as pictures of these items was associated with visual hallucinations. Several memory errors were also inversely associated with affective flattening and anhedonia. Verbal and visual hallucinations are associated with confusion between internal verbal thoughts or internal visual images and perception. In addition, auditory hallucinations are associated with failure to process or remember the context of presentation of the events. Certain negative symptoms have an opposite effect on memory errors.

  20. Estimation of distance error by fuzzy set theory required for strength determination of HDR (192)Ir brachytherapy sources.

    Science.gov (United States)

    Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N

    2014-04-01

    Verification of the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm(3) is one of the recommended methods for measuring the RAKR of HDR (192)Ir brachytherapy sources. When using the cylindrical chamber method, it is necessary to determine the positioning error of the ionization chamber with respect to the source, which is called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error. A simplified approach to applying fuzzy set theory to the quantification of the uncertainty associated with the distance error has been proposed. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such a distance may be of this order. It is observed that the relative distances li estimated by the analytical method and by the fuzzy set theoretic approach are consistent with each other. The crisp values of li estimated using the analytical method lie within the bounds computed using fuzzy set theory. This indicates that the li values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget when estimating the expanded uncertainty in HDR (192)Ir source strength measurement.

  1. Artificial Intelligence and Second Language Learning: An Efficient Approach to Error Remediation

    Science.gov (United States)

    Dodigovic, Marina

    2007-01-01

    While theoretical approaches to error correction vary in the second language acquisition (SLA) literature, most sources agree that such correction is useful and leads to learning. While some point out the relevance of the communicative context in which the correction takes place, others stress the value of consciousness-raising. Trying to…

  2. Methodical assessment of all non-ionizing radiation sources that can provide a relevant contribution to public exposure. Final report

    International Nuclear Information System (INIS)

    Bornkessel, Christian; Schubert, Markus; Wuschek, Matthias; Brueggemeyer, Hauke; Weiskopf, Daniela

    2011-01-01

    The aim of the research project was to systematically identify artificial sources of non-ionizing radiation (electric, magnetic or electromagnetic fields in the frequency range from 0 Hz to 300 GHz, as well as optical radiation in the wavelength range from 100 nm to 1 mm) that make a relevant contribution to public exposure. The report includes the following chapters: (1) concept for the relevance assessment of non-ionizing radiation sources; (2) concept for the systematic identification of sources from established technologies; (3) concept for the systematic identification of sources from new or foreseeable technologies; (4) overview of relevant radiation sources.

  3. Overview of sources of radioactive particles of Nordic relevance as well as a short description of available particle characterisation techniques

    International Nuclear Information System (INIS)

    Lind, O.C.; Salbu, B.; Nygren, U.; Thaning, L.; Ramebaeck, H.; Sidhu, S.; Roos, P.; Poellaenen, R.; Ranebo, Y.; Holm, E.

    2008-10-01

    The present overview report shows that there are many existing and potential sources of radioactive particle contamination of relevance to the Nordic countries. Following their release, radioactive particles represent point sources of short- and long-term radioecological significance, and failure to recognise their presence may lead to significant errors in the short- and long-term impact assessments related to radioactive contamination at a particular site. Thus, there is a need for knowledge with respect to the probability, quantity and expected impact of radioactive particle formation and release in the case of specified potential nuclear events (e.g. a reactor accident or nuclear terrorism). Furthermore, knowledge with respect to the particle characteristics influencing transport, ecosystem transfer and biological effects is important. In this respect, it should be noted that an IAEA coordinated research project ran from 2000 to 2006 (IAEA CRP, 2001) focussing on the characterisation and environmental impact of radioactive particles, while a new IAEA CRP focussing on the biological effects of radioactive particles will be launched in 2008. (au)

  4. Overview of sources of radioactive particles of Nordic relevance as well as a short description of available particle characterisation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Lind, O.C.; Salbu, B. (Norwegian Univ. of Life Sciences (Norway)); Nygren, U.; Thaning, L.; Ramebaeck, H. (Swedish Defense Research Agency (FOI) (Sweden)); Sidhu, S. (Inst. for Energy Technology (Norway)); Roos, P. (Technical Univ. of Denmark. Risoe DTU, Roskilde (Denmark)); Poellaenen, R. (STUK (Finland)); Ranebo, Y.; Holm, E. (Univ. Lund (Sweden))

    2008-10-15

    The present overview report shows that there are many existing and potential sources of radioactive particle contamination of relevance to the Nordic countries. Following their release, radioactive particles represent point sources of short- and long-term radioecological significance, and failure to recognise their presence may lead to significant errors in the short- and long-term impact assessments related to radioactive contamination at a particular site. Thus, there is a need for knowledge with respect to the probability, quantity and expected impact of radioactive particle formation and release in the case of specified potential nuclear events (e.g. a reactor accident or nuclear terrorism). Furthermore, knowledge with respect to the particle characteristics influencing transport, ecosystem transfer and biological effects is important. In this respect, it should be noted that an IAEA coordinated research project ran from 2000 to 2006 (IAEA CRP, 2001) focussing on the characterisation and environmental impact of radioactive particles, while a new IAEA CRP focussing on the biological effects of radioactive particles will be launched in 2008. (author)

  5. Source Memory Errors Associated with Reports of Posttraumatic Flashbacks: A Proof of Concept Study

    Science.gov (United States)

    Brewin, Chris R.; Huntley, Zoe; Whalley, Matthew G.

    2012-01-01

    Flashbacks are involuntary, emotion-laden images experienced by individuals with posttraumatic stress disorder (PTSD). The qualities of flashbacks could under certain circumstances lead to source memory errors. Participants with PTSD wrote a trauma narrative and reported the experience of flashbacks. They were later presented with stimuli from…

  6. Problems of accuracy and sources of error in trace analysis of elements

    International Nuclear Information System (INIS)

    Porat, Ze'ev.

    1995-07-01

    The technological developments in the field of analytical chemistry in recent years facilitate trace analysis of materials at sub-ppb levels. This provides important information regarding the presence of various trace elements in the human body, in drinking water and in the environment. However, it also exposes the measurements to more severe problems of contamination and inaccuracy due to the high sensitivity of the analytical methods. The sources of error are numerous and fall into three main groups: (a) impurities from various sources; (b) loss of material during sample processing; (c) problems of calibration and interference. These difficulties are discussed here in detail, together with some practical solutions and examples. (authors) 8 figs., 2 tabs., 18 refs.

  7. Problems of accuracy and sources of error in trace analysis of elements

    Energy Technology Data Exchange (ETDEWEB)

    Porat, Ze'ev

    1995-07-01

    The technological developments in the field of analytical chemistry in recent years facilitate trace analysis of materials at sub-ppb levels. This provides important information regarding the presence of various trace elements in the human body, in drinking water and in the environment. However, it also exposes the measurements to more severe problems of contamination and inaccuracy due to the high sensitivity of the analytical methods. The sources of error are numerous and fall into three main groups: (a) impurities from various sources; (b) loss of material during sample processing; (c) problems of calibration and interference. These difficulties are discussed here in detail, together with some practical solutions and examples. (authors) 8 figs., 2 tabs., 18 refs.

  8. Characterization of identification errors and uses in localization of poor modal correlation

    Science.gov (United States)

    Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry

    2017-05-01

    While modal identification is a mature subject, very few studies address the characterization of errors associated with individual components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion (MAC) is used to pair modes and to localize the sensors at which discrepancies occur. Poor correlation is usually attributed to modeling errors, but identification errors clearly occur as well. In particular, with 3D Scanning Laser Doppler Vibrometer measurement, many transfer functions are measured. As a result, individual validation of each measurement cannot be performed manually in a reasonable time frame, and a notable fraction of measurements is expected to be fairly noisy, leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, a diagnostic of the origin of the error is needed. The level evaluates the transfer function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal indicator, the product of error and level, is then shown to be relevant for detecting poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer function. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error indicator, it avoids keeping erroneous modal components.
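
    The MAC pairing and the sensor-removal ranking mentioned above are easy to sketch. The following is an illustrative Python sketch (the removal loop only mimics the idea attributed to MACCo in the abstract, not the published algorithm; all data are synthetic):

```python
import numpy as np

def mac(phi_a: np.ndarray, phi_b: np.ndarray) -> float:
    """Modal Assurance Criterion between two mode shape vectors."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

def sensor_removal_impact(phi_test: np.ndarray, phi_fem: np.ndarray):
    """Rank sensors by how much removing each one changes the MAC
    (a sketch in the spirit of a MACCo-style analysis)."""
    base = mac(phi_test, phi_fem)
    impacts = []
    for i in range(len(phi_test)):
        keep = np.ones(len(phi_test), dtype=bool)
        keep[i] = False
        impacts.append((i, mac(phi_test[keep], phi_fem[keep]) - base))
    return sorted(impacts, key=lambda t: t[1], reverse=True)

# Illustrative data: a FEM mode shape and a "test" shape with one noisy sensor.
rng = np.random.default_rng(1)
phi_fem = rng.normal(size=12)
phi_test = phi_fem + 0.05 * rng.normal(size=12)
phi_test[4] += 1.5  # badly identified component at sensor 4
print(sensor_removal_impact(phi_test, phi_fem)[:3])  # sensor 4 ranks first
```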

  9. Soft error evaluation in SRAM using α sources

    International Nuclear Information System (INIS)

    He Chaohui; Chu Jun; Ren Xueming; Xia Chunmei; Yang Xiupei; Zhang Weiwei; Wang Hongquan; Xiao Jiangbo; Li Xiaolin

    2006-01-01

    Soft errors in memories directly influence the reliability of products. To compare the resistance of three different memories to soft errors, alpha-particle irradiation experiments were performed: the numbers of soft errors were measured for three different SRAMs, and the single event upset (SEU) cross sections and failures in time (FIT) were calculated. According to the SEU cross sections, A166M is the most resistant to soft errors, followed by B166M and then B200M. The average FIT of B166M is smaller than that of B200M, and that of A166M is the largest among the three. (authors)
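
    The cross sections and FIT rates mentioned above follow from standard definitions; a hedged sketch (the formulae are the usual ones, and all numerical values are placeholders rather than the paper's data) is:

```python
def seu_cross_section(upsets: int, fluence_cm2: float, bits: int) -> float:
    """Per-bit SEU cross section (cm^2/bit): upsets per unit fluence per bit."""
    return upsets / (fluence_cm2 * bits)

def failures_in_time(cross_section_cm2: float, flux_cm2_h: float, bits: int) -> float:
    """FIT: expected upsets per 1e9 device-hours for an assumed particle flux."""
    return cross_section_cm2 * flux_cm2_h * bits * 1e9

# Hypothetical numbers (not from the paper): 350 upsets after a fluence of
# 1e7 alpha/cm^2 on a 4-Mbit SRAM, then a FIT estimate for an assumed flux.
bits = 4 * 1024 * 1024
sigma = seu_cross_section(350, 1e7, bits)
print(f"SEU cross section ~ {sigma:.2e} cm^2/bit")
print(f"FIT ~ {failures_in_time(sigma, 1e-3, bits):.1f}")
```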

  10. Dye shift: a neglected source of genotyping error in molecular ecology.

    Science.gov (United States)

    Sutton, Jolene T; Robertson, Bruce C; Jamieson, Ian G

    2011-05-01

    Molecular ecologists must be vigilant in detecting and accounting for genotyping error, yet potential errors stemming from dye-induced mobility shift (dye shift) may be frequently neglected and largely unknown to researchers who employ 3-primer systems with automated genotyping. When left uncorrected, dye shift can lead to mis-scoring alleles and even to falsely calling new alleles if different dyes are used to genotype the same locus in subsequent reactions. When we used four different fluorophore labels from a standard dye set to genotype the same set of loci, differences in the resulting size estimates for a single allele ranged from 2.07 bp to 3.68 bp. The strongest effects were associated with the fluorophore PET, and relative degree of dye shift was inversely related to locus size. We found little evidence in the literature that dye shift is regularly accounted for in 3-primer studies, despite knowledge of this phenomenon existing for over a decade. However, we did find some references to erroneous standard correction factors for the same set of dyes that we tested. We thus reiterate the need for strict quality control when attempting to reduce possible sources of genotyping error, and in cases where different dyes are applied to a single locus, perhaps mistakenly, we strongly discourage researchers from assuming generic correction patterns. © 2011 Blackwell Publishing Ltd.

  11. Sources of errors in the determination of fluorine in feeding stuffs

    Energy Technology Data Exchange (ETDEWEB)

    Oelschlaeger, W; Kirchgessner, M

    1960-01-01

    The difference between deficiency and toxicity levels of F in fodder is small; for this reason the many sources of error in the estimation of F content are discussed. A list of these error sources and suggested preventive measures are included. Finally, detailed working instructions are given for accurate F analysis, and representative F contents of certain feeding stuffs are tabulated. A maximal permissible limit for dairy cattle of 2-3 mg F per day per kg body weight is suggested. F contents of plants growing near HF-producing plants, especially downwind, are often dangerously high.

  12. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Cascade of neural events leading from error commission to subsequent awareness revealed using EEG source imaging.

    Directory of Open Access Journals (Sweden)

    Monica Dhar

    Full Text Available The goal of the present study was to shed light on the respective contributions of three important action monitoring brain regions (i.e. cingulate cortex, insula, and orbitofrontal cortex) during the conscious detection of response errors. To this end, fourteen healthy adults performed a speeded Go/Nogo task comprising Nogo trials of varying levels of difficulty, designed to elicit aware and unaware errors. Error awareness was indicated by participants with a second key press after the target key press. Meanwhile, electromyogram (EMG) from the response hand was recorded in addition to high-density scalp electroencephalogram (EEG). In the EMG-locked grand averages, aware errors clearly elicited an error-related negativity (ERN) reflecting error detection, and a later error positivity (Pe) reflecting conscious error awareness. However, no Pe was recorded after unaware errors or hits. These results are in line with previous studies suggesting that error awareness is associated with generation of the Pe. Source localisation results confirmed that the posterior cingulate motor area was the main generator of the ERN. However, inverse solution results also point to the involvement of the left posterior insula during the time interval of the Pe, and hence error awareness. Moreover, consecutive to this insular activity, the right orbitofrontal cortex (OFC) was activated in response to aware and unaware errors but not in response to hits, consistent with the implication of this area in the evaluation of the value of an error. These results reveal a precise sequence of activations in these three non-overlapping brain regions following error commission, enabling a progressive differentiation between aware and unaware errors as a function of time elapsed, thanks to the involvement first of interoceptive or proprioceptive processes (left insula), later leading to the detection of a breach in the prepotent response mode (right OFC).

  14. Predictors of Errors of Novice Java Programmers

    Science.gov (United States)

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  15. On the group approximation errors in description of neutron slowing-down at large distances from a source. Diffusion approach

    International Nuclear Information System (INIS)

    Kulakovskij, M.Ya.; Savitskij, V.I.

    1981-01-01

    The errors in multigroup calculations of the spatial and energy distribution of the neutron flux in a fast reactor shield, caused by using the group and age approximations, are considered. It is shown that at small distances from a source the age theory describes the distribution of the slowing-down density rather well. As the distance increases, the age approximation underestimates the neutron fluxes, and the error grows quickly. At small distances from the source (up to 15 mean free paths in graphite) the multigroup diffusion approximation describes the distribution of the slowing-down density quite satisfactorily, and the results depend only weakly on the number of groups. At larger distances the multigroup diffusion calculations considerably overestimate the slowing-down density. The conclusion is drawn that the errors inherent in the group approximation are opposite in sign to the error introduced by the age approximation and to some extent compensate each other.
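
    For context, the age approximation referred to above treats slowing down as a continuous process; the textbook Fermi age equation and its point-source solution in an infinite medium (standard material, not taken from the paper) are:

```latex
\[
  \nabla^{2} q(\mathbf{r},\tau) \;=\; \frac{\partial q(\mathbf{r},\tau)}{\partial \tau},
  \qquad
  q(r,\tau) \;=\; \frac{S\,e^{-r^{2}/4\tau}}{(4\pi\tau)^{3/2}},
\]
```

    where q is the slowing-down density, tau the Fermi age and S the source strength. The Gaussian fall-off in r^2 decays faster than the roughly exponential behaviour of transport solutions at large distances, consistent with the underestimation of the fluxes noted above.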

  16. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

    Full Text Available The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was also below 0.1 K in experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.
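
    Assuming the ratio is quoted as an amplitude ratio in decibels (an assumption, since the exact definition is not given in this abstract), the 80 dB threshold corresponds to a signal four orders of magnitude larger than the AC bias:

```python
import math

def signal_to_ac_bias_ratio_db(signal_amplitude: float, ac_bias_amplitude: float) -> float:
    """Signal-to-AC-bias ratio expressed in decibels (amplitude ratio)."""
    return 20.0 * math.log10(signal_amplitude / ac_bias_amplitude)

# Hypothetical amplitudes (arbitrary units, not values from the paper):
ratio = signal_to_ac_bias_ratio_db(1.0, 1e-4)
print(f"{ratio:.0f} dB")  # 80 dB, the threshold quoted for <0.1 K temperature error
```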

  17. On the errors in measurements of Ohio 5 radio sources in the light of the GB survey

    International Nuclear Information System (INIS)

    Machalski, J.

    1975-01-01

    Positions and flux densities of 405 OSU 5 radio sources surveyed at 1415 MHz down to 0.18 f.u. (Brundage et al. 1971) have been examined in the light of data from the GB survey made at 1400 MHz (Maslowski 1972). An identification analysis has shown that about 56% of OSU sources reveal themselves as single, 18% as confused, 20% as unresolved, and 6%, having no counterparts in the GB survey down to 0.09 f.u., seem to be spurious. The single OSU sources are strongly affected by underestimation of their flux densities due to the base-line procedure in their vicinity. An average value of about 0.03 f.u. has been found for this systematic underestimation. The second systematic error is due to the presence of a significant number of confused sources with strongly overestimated flux densities. The confusion effect gives a characteristic non-Gaussian tail in the distribution of differences between observed and real flux densities. The confusion effect has a strong influence on source counts from the OSU 5 survey. Differential number counts relative to those from the GB survey show that the counts agree within the statistical uncertainty down to about 0.40 f.u., which is approximately 4 delta (delta being the average rms flux density error in the OSU 5 survey). Below 0.40 f.u. the number of sources missing due to the confusion effect is significantly greater than the number overestimated due to the noise error. Thus, this part of the OSU 5 source counts cannot be treated seriously, even in the statistical sense. An analysis of the approximate reliability and completeness of the OSU 5 survey shows that, although the total reliability estimated by the authors of the survey is good, the completeness is significantly lower because the magnitude of the confusion effect was underestimated. In fact, the OSU 5 completeness is 67% at 0.18 f.u. and 79% at 0.25 f.u. (author)

  18. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  19. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    Science.gov (United States)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions of the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for the various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little of the literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods using a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is then proposed. SOC estimation methods are analyzed from the viewpoints of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to analyze the error sources, from the signal measurement to the models and algorithms, for the online SOC estimation methods widely used in new energy vehicles. Finally, with consideration of the working conditions, the choice of more reliable and applicable SOC estimation methods is discussed, and the future development of promising online SOC estimation methods is suggested.
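
    As a concrete illustration of how measured values, model parameters and the initial state each contribute to SOC error, here is a minimal Coulomb-counting sketch (a generic textbook method; all parameters and numbers are placeholders, not taken from the paper):

```python
import numpy as np

def coulomb_counting_soc(current_a, dt_s, capacity_ah, soc0,
                         current_bias_a=0.0, capacity_error_ah=0.0, soc0_error=0.0):
    """Ampere-hour (Coulomb counting) SOC estimate with three illustrative
    error sources: current-sensor bias, capacity error and initial-SOC error.
    Sign convention: positive current = discharge."""
    capacity = capacity_ah + capacity_error_ah
    soc = soc0 + soc0_error
    measured = np.asarray(current_a) + current_bias_a
    # integrate the (possibly biased) current and convert A*s -> Ah
    return soc - np.cumsum(measured) * dt_s / 3600.0 / capacity

# Hypothetical 1 C discharge of a 2 Ah cell sampled at 1 s, with a 20 mA bias.
i = np.full(3600, 2.0)
true_soc = coulomb_counting_soc(i, 1.0, 2.0, 1.0)
biased_soc = coulomb_counting_soc(i, 1.0, 2.0, 1.0, current_bias_a=0.02)
print(f"end-of-hour SOC error from sensor bias alone: {abs(true_soc[-1] - biased_soc[-1]):.3f}")
```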

  20. Summary of mirror experiments relevant to beam-plasma neutron source

    International Nuclear Information System (INIS)

    Molvik, A.W.

    1988-01-01

    A promising design for a deuterium-tritium (DT) neutron source is based on the injection of neutral beams into a dense, warm plasma column. Its purpose is to test materials for possible use in fusion reactors. A series of designs have evolved, from a 4-T version to an 8-T version. Intense fluxes of 5-10 MW/m2 are achieved at the plasma surface, sufficient to complete end-of-life tests in one to two years. In this report, we review data from earlier mirror experiments that are relevant to such neutron sources. Most of these data are from 2XIIB, which was the only facility to ever inject 5 MW of neutral beams into a single mirror cell. The major physics issues for a beam-plasma neutron source are magnetohydrodynamic (MHD) equilibrium and stability, microstability, startup, cold-ion fueling of the midplane to allow two-component reactions, and operation in the Spitzer conduction regime, where the power is removed to the ends by an axial gradient in the electron temperature T_e. We show in this report that the conditions required for a neutron source have now been demonstrated in experiments. 20 refs., 15 figs., 3 tabs

  1. The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate

    Science.gov (United States)

    Polio, Charlene

    2012-01-01

    The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected "given the nature of the correction process and "the nature of language learning" (p. 328, emphasis…

  2. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    Science.gov (United States)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  3. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    International Nuclear Information System (INIS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-01-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10 deg. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  4. Field errors in hybrid insertion devices

    International Nuclear Information System (INIS)

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed

  5. Field errors in hybrid insertion devices

    Energy Technology Data Exchange (ETDEWEB)

    Schlueter, R.D. [Lawrence Berkeley Lab., CA (United States)]

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  6. Characterization of the main error sources of chromatic confocal probes for dimensional measurement

    International Nuclear Information System (INIS)

    Nouira, H; El-Hayek, N; Yuan, X; Anwer, N

    2014-01-01

    Chromatic confocal probes are increasingly used in high-precision dimensional metrology applications such as roughness, form, thickness and surface profile measurements; however, their measurement behaviour is not well understood and must be characterized at a nanometre level. This paper provides a calibration bench for the characterization of two chromatic confocal probes of 20 and 350 µm travel ranges. The metrology loop that includes the chromatic confocal probe is stable and enables measurement repeatability at the nanometer level. With the proposed system, the major error sources, such as the relative axial and radial motions of the probe with respect to the sample, the material, colour and roughness of the measured sample, the relative deviation/tilt of the probe and the scanning speed are identified. Experimental test results show that the chromatic confocal probes are sensitive to these errors and that their measurement behaviour is highly dependent on them. (paper)

  7. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.

    Science.gov (United States)

    Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A

    2018-01-01

    Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) used with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options to manufacture RPMs, with the benefit of low cost and a relative error similar to that of other, more expensive technologies.
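
    The accuracy figures quoted above are simple aggregate statistics over paired linear measurements; a minimal sketch of how they might be computed (the landmark values below are placeholders, not the study's data) is:

```python
import numpy as np

def dimensional_error(reference_mm: np.ndarray, printed_mm: np.ndarray):
    """Mean absolute difference (mm) and mean relative difference (%) between
    linear measurements on the original object and on the printed model."""
    diff = np.abs(printed_mm - reference_mm)
    return diff.mean(), 100.0 * (diff / reference_mm).mean()

# Placeholder measurements of a few mandible landmarks (mm).
ref = np.array([101.2, 35.6, 78.4, 52.1])
printed = np.array([100.7, 36.2, 77.9, 52.8])
abs_mm, rel_pct = dimensional_error(ref, printed)
print(f"mean absolute difference: {abs_mm:.2f} mm, mean relative difference: {rel_pct:.2f}%")
```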

  8. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer

    Directory of Open Access Journals (Sweden)

    Marco A. Rendón-Medina

    2018-01-01

    Full Text Available Summary: Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) used with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative difference. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options to manufacture RPMs, with the benefit of low cost and a relative error similar to that of other, more expensive technologies.

  9. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    Energy Technology Data Exchange (ETDEWEB)

    Joint Graduate Group in Bioengineering, University of California, San Francisco and University of California, Berkeley; Department of Radiology, University of California; Gullberg, Grant T; Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the
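
    To put the quoted attenuation losses in perspective, a narrow-beam exponential attenuation sketch (with assumed, approximate attenuation coefficients and phantom radius; not the Monte Carlo model used in the study) reproduces the right order of magnitude:

```python
import math

def center_attenuation_factor(mu_cm: float, radius_cm: float) -> float:
    """Fraction of photons surviving from the centre of a water cylinder,
    assuming narrow-beam exponential attenuation exp(-mu * r)."""
    return math.exp(-mu_cm * radius_cm)

# Approximate (assumed) linear attenuation coefficients of water, cm^-1,
# at the relevant photon energies; a ~2 cm radius stands in for a rat-sized phantom.
for label, mu in [("I-125 (~27-35 keV)", 0.38), ("Tc-99m (140 keV)", 0.15)]:
    f = center_attenuation_factor(mu, 2.0)
    print(f"{label}: survival {f:.2f} -> ~{100 * (1 - f):.0f}% underestimation")
```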

  10. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  11. Imagery encoding and false recognition errors: Examining the role of imagery process and imagery content on source misattributions.

    Science.gov (United States)

    Foley, Mary Ann; Foy, Jeffrey; Schlemmer, Emily; Belser-Ehrlich, Janna

    2010-11-01

    Imagery encoding effects on source-monitoring errors were explored using the Deese-Roediger-McDermott paradigm in two experiments. While viewing thematically related lists embedded in mixed picture/word presentations, participants were asked to generate images of objects or words (Experiment 1) or to simply name the items (Experiment 2). An encoding task intended to induce spontaneous images served as a control for the explicit imagery instruction conditions (Experiment 1). On the picture/word source-monitoring tests, participants were much more likely to report "seeing" a picture of an item presented as a word than the converse particularly when images were induced spontaneously. However, this picture misattribution error was reversed after generating images of words (Experiment 1) and was eliminated after simply labelling the items (Experiment 2). Thus source misattributions were sensitive to the processes giving rise to imagery experiences (spontaneous vs deliberate), the kinds of images generated (object vs word images), and the ways in which materials were presented (as pictures vs words).

  12. Imprecision in waggle dances of the honeybee (Apis mellifera) for nearby food sources : error or adaptation?

    OpenAIRE

    Weidenmüller, Anja; Seeley, Thomas

    1999-01-01

    A curious feature of the honeybee's waggle dance is the imprecision in the direction indication for nearby food sources. One hypothesis for the function of this imprecision is that it serves to spread recruits over a certain area and thus is an adaptation to the typical spatial configuration of the bees' food sources, i.e., flowers in sizable patches. We report an experiment that tests this tuned-error hypothesis. We measured the precision of direction indication in waggle dances advertising ...

  13. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    Science.gov (United States)

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (r_S > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
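    A minimal sketch of the exposure metric described above (a time-weighted average over the addresses occupied during pregnancy) compared with the birth-address-only proxy; the addresses, durations, and concentrations are invented for illustration.

```python
# Hedged sketch of the "complete residential history" exposure metric: a
# time-weighted average of address-specific annual-mean mobile-source PM2.5
# concentrations over the pregnancy. All values below are invented.
residences = [
    # (days at address during pregnancy, modeled annual-mean PM2.5 at address, ug/m3)
    (120, 1.8),
    (100, 0.9),
    (60,  1.2),   # assumed to be the address at delivery
]

total_days = sum(days for days, _ in residences)
weighted_exposure = sum(days * conc for days, conc in residences) / total_days
birth_address_only = residences[-1][1]   # proxy used when mobility is ignored

print(f"time-weighted prenatal exposure: {weighted_exposure:.2f} ug/m3")
print(f"birth-address-only proxy:        {birth_address_only:.2f} ug/m3")
```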

  14. Analysis of field errors in existing undulators

    International Nuclear Information System (INIS)

    Kincaid, B.M.

    1990-01-01

    The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs

  15. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
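    As a hedged illustration of this kind of model, the sketch below simulates a generic two-state clock-error model (phase plus frequency random walk); the report's formulation covers more oscillator error sources and allows them to be included or excluded, so the noise intensities here are placeholder values only.

```python
import random

# Minimal two-state clock-error simulation: a frequency random walk plus
# white frequency noise accumulating into the clock phase (time) error.
# The noise intensities and time step are illustrative assumptions.
DT = 1.0          # time step (s), assumed
N_STEPS = 1000
Q_PHASE = 1e-22   # white-frequency noise intensity (illustrative)
Q_FREQ = 1e-26    # random-walk-frequency noise intensity (illustrative)

phase, freq = 0.0, 0.0
for _ in range(N_STEPS):
    freq += random.gauss(0.0, (Q_FREQ * DT) ** 0.5)                 # frequency random walk
    phase += freq * DT + random.gauss(0.0, (Q_PHASE * DT) ** 0.5)   # phase accumulates

print(f"simulated clock phase error after {N_STEPS * DT:.0f} s: {phase:.3e} s")
```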

  16. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  17. Study on analysis from sources of error for Airborne LIDAR

    Science.gov (United States)

    Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.

    2016-11-01

    With the advancement of aerial photogrammetry, Airborne LIDAR measurement techniques provide a new technical means of obtaining geo-spatial information of high spatial and temporal resolution, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of Earth observation technology: mounted on an aviation platform, it emits and receives laser pulses to obtain high-precision, high-density three-dimensional point cloud coordinates and intensity information. In this paper, we briefly describe airborne laser radar systems, analyze in detail several error sources affecting Airborne LIDAR data, and put forward corresponding methods to avoid or eliminate them. Taking practical engineering applications into account, recommendations are given for these designs, which has crucial theoretical and practical significance for Airborne LIDAR data processing.

  18. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  19. [Responsibility due to medication errors in France: a study based on SHAM insurance data].

    Science.gov (United States)

    Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P

    2015-03-01

    Medication safety in hospitals is a major public health issue. The drug supply chain is a complex process and a potential source of errors and harm to the patient. SHAM is the largest French provider of medical liability insurance and a relevant source of data on health care complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a conviction by a court. We performed a retrospective study of insurance claims provided by SHAM involving a medication error and leading to a conviction over a 6-year period (between 2005 and 2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. The consequences of the claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were drug monitoring errors (11 cases), administration errors (5 cases), overdoses (6 cases), allergies (4 cases), contraindications (3 cases) and omissions (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court expert were errors related to service organization (11), to medical practice (11) or to nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  20. SPACE-BORNE LASER ALTIMETER GEOLOCATION ERROR ANALYSIS

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2018-05-01

    Full Text Available This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed through simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and to satisfy the accuracy of the laser control point, a design index for each error source is put forward.
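    A loosely related, first-order illustration (not the paper's rigorous model): for near-nadir geometry, a pointing-angle error of sigma_theta displaces the footprint horizontally by roughly H * sigma_theta, while the range error maps mostly into the vertical. All numbers below are assumed.

```python
import math

# First-order, assumed-value sketch of how pointing, positioning and ranging
# errors map into footprint geolocation error for a nadir-pointing altimeter.
H = 600e3                      # orbital altitude (m), assumed
SIGMA_POINTING = 1.5 / 206265  # 1.5 arcsec pointing error, converted to radians
SIGMA_RANGE = 0.10             # range measurement error (m), assumed
SIGMA_POS = 0.05               # platform positioning error (m), assumed

horizontal = math.hypot(H * SIGMA_POINTING, SIGMA_POS)  # pointing dominates horizontally
vertical = SIGMA_RANGE                                   # range error is mostly vertical near nadir

print(f"horizontal footprint error ~ {horizontal:.2f} m")
print(f"vertical error ~ {vertical:.2f} m")
```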

  1. Accounting for optical errors in microtensiometry.

    Science.gov (United States)

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications on all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane on measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveal a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allow for correct measure of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup

  2. Rotational error in path integration: encoding and execution errors in angle reproduction.

    Science.gov (United States)

    Chrastil, Elizabeth R; Warren, William H

    2017-06-01

    Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error-the error in registering the outbound path in memory. Here, we also consider execution error-the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.

  3. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  4. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  5. The U.S. Navy's Global Wind-Wave Models: An Investigation into Sources of Errors in Low-Frequency Energy Predictions

    National Research Council Canada - National Science Library

    Rogers, W

    2002-01-01

    This report describes an investigation to determine the relative importance of various sources of error in the two global-scale models of wind-generated surface waves used operationally by the U.S. Navy...

  6. Error Mitigation for Short-Depth Quantum Circuits

    Science.gov (United States)

    Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.

    2017-11-01

    Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
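    A minimal sketch of the first scheme, zero-noise extrapolation: expectation values measured at artificially amplified noise levels are fitted and extrapolated back to zero noise. The "measurements" below are synthetic; on hardware they would come from running noise-scaled circuits.

```python
import numpy as np

# Richardson-style extrapolation to the zero-noise limit. The noisy
# expectation values are generated from a synthetic model purely for
# illustration; real values would come from noise-amplified circuit runs.
def noisy_expectation(scale, true_value=1.0, a=-0.3, b=0.05):
    # synthetic noise model: E(scale) = true_value + a*scale + b*scale^2
    return true_value + a * scale + b * scale**2

scales = np.array([1.0, 2.0, 3.0])                     # noise amplification factors
values = np.array([noisy_expectation(s) for s in scales])

coeffs = np.polyfit(scales, values, deg=len(scales) - 1)
zero_noise_estimate = np.polyval(coeffs, 0.0)          # extrapolate to scale = 0

print(f"raw (scale=1) estimate:            {values[0]:.4f}")
print(f"zero-noise extrapolated estimate:  {zero_noise_estimate:.4f}")
```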

  7. Putting into practice error management theory: Unlearning and learning to manage action errors in construction.

    Science.gov (United States)

    Love, Peter E D; Smith, Jim; Teo, Pauline

    2018-05-01

    Error management theory is drawn upon to examine how a project-based organization, which took the form of a program alliance, was able to change its established error prevention mindset to one that enacted a learning mindfulness that provided an avenue to curtail its action errors. The program alliance was required to unlearn its existing routines and beliefs to accommodate the practices required to embrace error management. As a result of establishing an error management culture the program alliance was able to create a collective mindfulness that nurtured learning and supported innovation. The findings provide a much-needed context to demonstrate the relevance of error management theory to effectively address rework and safety problems in construction projects. The robust theoretical underpinning that is grounded in practice and presented in this paper provides a mechanism to engender learning from errors, which can be utilized by construction organizations to improve the productivity and performance of their projects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Analysis of strain error sources in micro-beam Laue diffraction

    International Nuclear Information System (INIS)

    Hofmann, Felix; Eve, Sophie; Belnoue, Jonathan; Micha, Jean-Sébastien; Korsunsky, Alexander M.

    2011-01-01

    Micro-beam Laue diffraction is an experimental method that allows the measurement of local lattice orientation and elastic strain within individual grains of engineering alloys, ceramics, and other polycrystalline materials. Unlike other analytical techniques, e.g. based on electron microscopy, it is not limited to surface characterisation or thin sections, but rather allows non-destructive measurements in the material bulk. This is of particular importance for in situ loading experiments where the mechanical response of a material volume (rather than just surface) is studied and it is vital that no perturbation/disturbance is introduced by the measurement technique. Whilst the technique allows lattice orientation to be determined to a high level of precision, accurate measurement of elastic strains and estimating the errors involved is a significant challenge. We propose a simulation-based approach to assess the elastic strain errors that arise from geometrical perturbations of the experimental setup. Using an empirical combination rule, the contributions of different geometrical uncertainties to the overall experimental strain error are estimated. This approach was applied to the micro-beam Laue diffraction setup at beamline BM32 at the European Synchrotron Radiation Facility (ESRF). Using a highly perfect germanium single crystal, the mechanical stability of the instrument was determined and hence the expected strain errors predicted. Comparison with the actual strain errors found in a silicon four-point beam bending test showed good agreement. The simulation-based error analysis approach makes it possible to understand the origins of the experimental strain errors and thus allows a directed improvement of the experimental geometry to maximise the benefit in terms of strain accuracy.

  9. Back to the basics: Identifying and addressing underlying challenges in achieving high quality and relevant health statistics for indigenous populations in Canada.

    Science.gov (United States)

    Smylie, Janet; Firestone, Michelle

    Canada is known internationally for excellence in both the quality and public policy relevance of its health and social statistics. There is a double standard however with respect to the relevance and quality of statistics for Indigenous populations in Canada. Indigenous specific health and social statistics gathering is informed by unique ethical, rights-based, policy and practice imperatives regarding the need for Indigenous participation and leadership in Indigenous data processes throughout the spectrum of indicator development, data collection, management, analysis and use. We demonstrate how current Indigenous data quality challenges including misclassification errors and non-response bias systematically contribute to a significant underestimate of inequities in health determinants, health status, and health care access between Indigenous and non-Indigenous people in Canada. The major quality challenge underlying these errors and biases is the lack of Indigenous specific identifiers that are consistent and relevant in major health and social data sources. The recent removal of an Indigenous identity question from the Canadian census has resulted in further deterioration of an already suboptimal system. A revision of core health data sources to include relevant, consistent, and inclusive Indigenous self-identification is urgently required. These changes need to be carried out in partnership with Indigenous peoples and their representative and governing organizations.

  10. Error-free versus mutagenic processing of genomic uracil--relevance to cancer.

    Science.gov (United States)

    Krokan, Hans E; Sætrom, Pål; Aas, Per Arne; Pettersen, Henrik Sahlin; Kavli, Bodil; Slupphaug, Geir

    2014-07-01

    Genomic uracil is normally processed essentially error-free by base excision repair (BER), with mismatch repair (MMR) as an apparent backup for U:G mismatches. Nuclear uracil-DNA glycosylase UNG2 is the major enzyme initiating BER of uracil of U:A pairs as well as U:G mismatches. Deficiency in UNG2 results in several-fold increases in genomic uracil in mammalian cells. Thus, the alternative uracil-removing glycosylases, SMUG1, TDG and MBD4 cannot efficiently complement UNG2-deficiency. A major function of SMUG1 is probably to remove 5-hydroxymethyluracil from DNA with general back-up for UNG2 as a minor function. TDG and MBD4 remove deamination products U or T mismatched to G in CpG/mCpG contexts, but may have equally or more important functions in development, epigenetics and gene regulation. Genomic uracil was previously thought to arise only from spontaneous cytosine deamination and incorporation of dUMP, generating U:G mismatches and U:A pairs, respectively. However, the identification of activation-induced cytidine deaminase (AID) and other APOBEC family members as DNA-cytosine deaminases has spurred renewed interest in the processing of genomic uracil. Importantly, AID triggers the adaptive immune response involving error-prone processing of U:G mismatches, but also contributes to B-cell lymphomagenesis. Furthermore, mutational signatures in a substantial fraction of other human cancers are consistent with APOBEC-induced mutagenesis, with U:G mismatches as prime suspects. Mutations can be caused by replicative polymerases copying uracil in U:G mismatches, or by translesion polymerases that insert incorrect bases opposite abasic sites after uracil-removal. In addition, kataegis, localized hypermutations in one strand in the vicinity of genomic rearrangements, requires APOBEC protein, UNG2 and translesion polymerase REV1. What mechanisms govern error-free versus error prone processing of uracil in DNA remains unclear. In conclusion, genomic uracil is an

  11. Soft X-ray sources and their optical counterparts in the error box of the COS-B source 2CG 135+01

    Energy Technology Data Exchange (ETDEWEB)

    Caraveo, P A; Bignami, G F [Consiglio Nazionale delle Ricerche, Milan (Italy). Lab. di Fisica Cosmica e Tecnologie Relative; Paul, J A [CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France). Section d' Astrophysique; Marano, B [Bologna Univ. (Italy). Ist. di Astronomia; Vettolani, G P [Consiglio Nazionale delle Ricerche, Bologna (Italy). Lab. di Radioastronomia

    1981-01-01

    We shall present here the Einstein observations for the 2CG 135+01 region where the results are complete in the sense that we have a satisfactory coverage of the COS-B error box and, more important, that all the IPC sources found have been identified, through both HRI and optical observations. In particular, the new spectral classifications of the present work were obtained at the Lojano Observatory (Bologna, Italy) with the Boller and Chivens spectrograph at the Cassegrain focus of the 1.52 m telescope. The spectral dispersion is 80 Å/mm.

  12. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  13. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on the average β-beating is studied via analytical derivations and simulations. A systematic positive average β-beating is expected from random errors, scaling quadratically with the source strengths or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  14. Clinical relevance of pharmacist intervention in an emergency department.

    Science.gov (United States)

    Pérez-Moreno, Maria Antonia; Rodríguez-Camacho, Juan Manuel; Calderón-Hernanz, Beatriz; Comas-Díaz, Bernardino; Tarradas-Torras, Jordi

    2017-08-01

    To evaluate the clinical relevance of pharmacist intervention on patient care in the emergency department and to determine the severity of the detected errors. Second, to analyse the most frequent types of interventions and the types of drugs involved, and to evaluate the clinical pharmacist's activity. A 6-month observational prospective study of pharmacist intervention in the Emergency Department (ED) at a 400-bed hospital in Spain was performed to record interventions carried out by the clinical pharmacists. We determined whether the intervention occurred in the process of medication reconciliation or another activity, and whether the drug involved belonged to the High-Alert Medications Institute for Safe Medication Practices (ISMP) list. To evaluate the severity of the errors detected and the clinical relevance of the pharmacist intervention, a modified assessment scale of Overhage and Lukes was used. The relationship between the clinical relevance of pharmacist intervention and the severity of medication errors was assessed using ORs and Spearman's correlation coefficient. During the observation period, pharmacists reviewed the pharmacotherapy history and medication orders of 2984 patients. A total of 991 interventions were recorded in 557 patients; 67.2% of the errors were detected during medication reconciliation. Medication errors were considered severe in 57.2% of cases and 64.9% of pharmacist interventions were considered relevant. About 10.9% of the drugs involved are on the High-Alert Medications ISMP list. The severity of the medication error and the clinical significance of the pharmacist intervention were correlated (Spearman's ρ = 0.728, p < 0.001). Clinical pharmacists identified and intervened on a high number of severe medication errors. This suggests that emergency services will benefit from pharmacist-provided drug therapy services. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  15. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    Science.gov (United States)

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.

  16. Estimating the relevance of world disturbances to explain savings, interference and long-term motor adaptation effects.

    Directory of Open Access Journals (Sweden)

    Max Berniker

    2011-10-01

    Full Text Available Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined if these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies on the other hand should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters.

  17. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates to realize future satellite communications and space observations since they are lightweight, low cost, and have a small packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors, and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved, including errors in membrane thickness, errors in elastic modulus of membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation and the interaction between error sources. Analyses are parametrically carried out with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on significance ranking of error sources. The research reveals that the RMS (Root Mean Square) of shape error is a random quantity with an exponential probability distribution and features great dispersion; with the increase of F/D and D, both mean value and standard deviation of shape errors are increased; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect with a much higher weight than the others; pressure variation ranks the second; error in thickness and elastic modulus of membrane ranks the last with very close sensitivities to pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors and allowable values of error sources are proposed from the perspective of reliability.
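    A hedged sketch of the sampling-plus-ranking workflow described above: Latin hypercube samples of the four error sources are pushed through a surrogate shape-error function and ranked by Spearman correlation. The surrogate response and parameter scaling are invented; the study uses full manufacture simulations of the reflector instead.

```python
import numpy as np

# Latin hypercube sampling of four error sources followed by a rank-correlation
# sensitivity ranking. The surrogate RMS-shape-error function is invented so
# that boundary deviation dominates and pressure ranks second, mirroring the
# qualitative finding above; it is not the paper's model.
rng = np.random.default_rng(0)

def latin_hypercube(n_samples, n_dims):
    # one stratified sample per interval, independently permuted per dimension
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        u[:, j] = rng.permutation(u[:, j])
    return u

names = ["thickness", "modulus", "boundary", "pressure"]
X = latin_hypercube(200, len(names))               # unit-scaled error sources

def surrogate_rms(x):
    return 5.0 * x[2] + 2.0 * x[3] + 0.5 * x[0] + 0.5 * x[1] + rng.normal(0, 0.1)

y = np.array([surrogate_rms(row) for row in X])

def spearman(a, b):
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

for j, name in enumerate(names):
    print(f"{name:10s} rank correlation with RMS shape error: {spearman(X[:, j], y):+.2f}")
```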

  18. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave "significantly" better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs
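    As a generic illustration of such a propagation-of-error comparison (with invented numbers, not the IAEA study values), random components average down with repeated measurements while systematic components do not:

```python
import math

# Combining random and systematic relative error components for an assayed
# plutonium mass. All values are illustrative assumptions.
pu_mass = 500.0          # grams of plutonium in the item (assumed)
random_rel = 0.010       # 1.0 % relative random error per measurement (assumed)
systematic_rel = 0.005   # 0.5 % relative systematic error (assumed)
n_repeats = 4            # repeats average down only the random component

random_abs = pu_mass * random_rel / math.sqrt(n_repeats)
systematic_abs = pu_mass * systematic_rel
total_abs = math.hypot(random_abs, systematic_abs)   # quadrature combination

print(f"random component:     {random_abs:.2f} g")
print(f"systematic component: {systematic_abs:.2f} g")
print(f"combined uncertainty: {total_abs:.2f} g")
```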

  19. Error analysis of satellite attitude determination using a vision-based approach

    Science.gov (United States)

    Carozza, Ludovico; Bevilacqua, Alessandro

    2013-09-01

    Improvements in communication and processing technologies have opened the doors to exploit on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remote sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through on-purpose simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for visual based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for all the affine applicative domains which require an accurate estimation of three-dimensional orientation parameters (i.e., robotics, airborne stabilization).

  20. Learning curves, taking instructions, and patient safety: using a theoretical domains framework in an interview study to investigate prescribing errors among trainee doctors

    Directory of Open Access Journals (Sweden)

    Duncan Eilidh M

    2012-09-01

    Full Text Available Background: Prescribing errors are a major source of morbidity and mortality and represent a significant patient safety concern. Evidence suggests that trainee doctors are responsible for most prescribing errors. Understanding the factors that influence prescribing behavior may lead to effective interventions to reduce errors. Existing investigations of prescribing errors have been based on Human Error Theory but not on other relevant behavioral theories. The aim of this study was to apply a broad theory-based approach using the Theoretical Domains Framework (TDF) to investigate prescribing in the hospital context among a sample of trainee doctors. Method: Semistructured interviews, based on 12 theoretical domains, were conducted with 22 trainee doctors to explore views, opinions, and experiences of prescribing and prescribing errors. Content analysis was conducted, followed by applying relevance criteria and a novel stage of critical appraisal, to identify which theoretical domains could be targeted in interventions to improve prescribing. Results: Seven theoretical domains met the criteria of relevance: "social professional role and identity," "environmental context and resources," "social influences," "knowledge," "skills," "memory, attention, and decision making," and "behavioral regulation." From critical appraisal of the interview data, "beliefs about consequences" and "beliefs about capabilities" were also identified as potentially important domains. Interrelationships between domains were evident. Additionally, the data supported theoretical elaboration of the domain behavioral regulation. Conclusions: In this investigation of hospital-based prescribing, participants' attributions about causes of errors were used to identify domains that could be targeted in interventions to improve prescribing. In a departure from previous TDF practice, critical appraisal was used to identify additional domains

  1. Learning curves, taking instructions, and patient safety: using a theoretical domains framework in an interview study to investigate prescribing errors among trainee doctors.

    Science.gov (United States)

    Duncan, Eilidh M; Francis, Jill J; Johnston, Marie; Davey, Peter; Maxwell, Simon; McKay, Gerard A; McLay, James; Ross, Sarah; Ryan, Cristín; Webb, David J; Bond, Christine

    2012-09-11

    Prescribing errors are a major source of morbidity and mortality and represent a significant patient safety concern. Evidence suggests that trainee doctors are responsible for most prescribing errors. Understanding the factors that influence prescribing behavior may lead to effective interventions to reduce errors. Existing investigations of prescribing errors have been based on Human Error Theory but not on other relevant behavioral theories. The aim of this study was to apply a broad theory-based approach using the Theoretical Domains Framework (TDF) to investigate prescribing in the hospital context among a sample of trainee doctors. Semistructured interviews, based on 12 theoretical domains, were conducted with 22 trainee doctors to explore views, opinions, and experiences of prescribing and prescribing errors. Content analysis was conducted, followed by applying relevance criteria and a novel stage of critical appraisal, to identify which theoretical domains could be targeted in interventions to improve prescribing. Seven theoretical domains met the criteria of relevance: "social professional role and identity," "environmental context and resources," "social influences," "knowledge," "skills," "memory, attention, and decision making," and "behavioral regulation." From critical appraisal of the interview data, "beliefs about consequences" and "beliefs about capabilities" were also identified as potentially important domains. Interrelationships between domains were evident. Additionally, the data supported theoretical elaboration of the domain behavioral regulation. In this investigation of hospital-based prescribing, participants' attributions about causes of errors were used to identify domains that could be targeted in interventions to improve prescribing. In a departure from previous TDF practice, critical appraisal was used to identify additional domains that should also be targeted, despite participants' perceptions that they were not relevant to

  2. Sensor Interaction as a Source of the Electromagnetic Field Measurement Error

    Directory of Open Access Journals (Sweden)

    Hartansky R.

    2014-12-01

    Full Text Available The article deals with the analytical calculation and numerical simulation of the mutual influence of electromagnetic sensors. The sensors are components of a field probe, and their mutual influence causes measurement error. An electromagnetic field probe contains three mutually perpendicular sensors in order to measure the electric field vector. The sensor error is evaluated as a function of the relative position of the sensors. Based on this, recommendations are proposed for the construction of electromagnetic field probes that minimize sensor interaction and measurement error.

  3. Computational Benchmark Calculations Relevant to the Neutronic Design of the Spallation Neutron Source (SNS)

    International Nuclear Information System (INIS)

    Gallmeier, F.X.; Glasgow, D.C.; Jerde, E.A.; Johnson, J.O.; Yugo, J.J.

    1999-01-01

    The Spallation Neutron Source (SNS) will provide an intense source of low-energy neutrons for experimental use. The low-energy neutrons are produced by the interaction of a high-energy (1.0 GeV) proton beam on a mercury (Hg) target and slowed down in liquid hydrogen or light water moderators. Computer codes and computational techniques are being benchmarked against relevant experimental data to validate and verify the tools being used to predict the performance of the SNS. The LAHET Code System (LCS), which includes LAHET, HTAPE and HMCNP (a modified version of MCNP version 3b), has been applied to the analysis of experiments that were conducted in the Alternating Gradient Synchrotron (AGS) facility at Brookhaven National Laboratory (BNL). In the AGS experiments, foils of various materials were placed around a mercury-filled stainless steel cylinder, which was bombarded with protons at 1.6 GeV. Neutrons created in the mercury target activated the foils. Activities of the relevant isotopes were accurately measured and compared with calculated predictions. Measurements at BNL were provided in part by collaborating scientists from JAERI as part of the AGS Spallation Target Experiment (ASTE) collaboration. To date, calculations have shown good agreement with measurements

  4. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservativism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
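    The point about correlated errors in a difference measurement can be illustrated with the standard variance formula Var(d2 - d1) = s1^2 + s2^2 - 2*rho*s1*s2; the sizing uncertainties and correlation values below are assumed, not taken from the paper.

```python
import math

# Growth estimate uncertainty for the difference of two flaw-depth measurements:
# errors common to both inspections (rho > 0) partially cancel in the difference.
s1 = 0.10   # depth uncertainty, first inspection (fraction of wall, assumed)
s2 = 0.10   # depth uncertainty, second inspection (assumed)

for rho in (0.0, 0.5, 0.8):
    sigma_growth = math.sqrt(s1**2 + s2**2 - 2 * rho * s1 * s2)
    print(f"correlation {rho:.1f}: growth uncertainty = {sigma_growth:.3f} (fraction of wall)")
```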

  5. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy corrected OCO backbone.

  6. Error threshold ghosts in a simple hypercycle with error prone self-replication

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2008-01-01

    A delayed transition because of mutation processes is shown to happen in a simple hypercycle composed of two indistinguishable molecular species with error prone self-replication. The appearance of a ghost near the hypercycle error threshold causes a delay in the extinction and thus in the loss of information of the mutually catalytic replicators, in a kind of information memory. The extinction time, τ, scales near the bifurcation threshold according to the universal square-root scaling law, i.e. τ ∼ (Q_hc − Q)^(−1/2), typical of dynamical systems close to a saddle-node bifurcation. Here, Q_hc represents the bifurcation point named the hypercycle error threshold, involved in the change among the asymptotic stability phase and the so-called Random Replication State (RRS) of the hypercycle; and the parameter Q is the replication quality factor. The ghost involves a longer transient towards extinction once the saddle-node bifurcation has occurred, being extremely long near the bifurcation threshold. The role of this dynamical effect is expected to be relevant in fluctuating environments. Such a phenomenon should also be found in larger hypercycles when considering the hypercycle species in competition with their error tail. The implications of the ghost in the survival and evolution of error prone self-replicating molecules with hypercyclic organization are discussed.
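    The quoted square-root scaling can be checked numerically with the saddle-node normal form dx/dt = mu + x^2 as a stand-in for the hypercycle dynamics, with mu playing the role of Q_hc − Q; only the τ ∼ mu^(−1/2) scaling is being illustrated, not the hypercycle model itself.

```python
import math

# Passage time through the saddle-node "ghost" bottleneck for dx/dt = mu + x^2.
# The product tau * sqrt(mu) should be approximately constant (≈ pi), which is
# the square-root scaling law referred to above.
def passage_time(mu, dt=1e-4, x0=-10.0, x_end=10.0):
    x, t = x0, 0.0
    while x < x_end:
        x += (mu + x * x) * dt   # forward Euler through the bottleneck
        t += dt
    return t

for mu in (0.01, 0.04, 0.16):
    tau = passage_time(mu)
    print(f"mu = {mu:.2f}: tau = {tau:6.2f}, tau*sqrt(mu) = {tau * math.sqrt(mu):.2f}")
```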

  7. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only a handful of studies have been made either to define the ‘error’ in a pedagogically insightful way or to empirically investigate the occurrence of errors in either linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production ...

  8. Reducing errors in aircraft atmospheric inversion estimates of point-source emissions: the Aliso Canyon natural gas leak as a natural tracer experiment

    Science.gov (United States)

    Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.

    2018-04-01

    Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s‑1, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and
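    A minimal, hedged sketch of a single-source Bayesian inversion in which the model-data mismatch covariance R is inflated with a per-observation wind-error proxy, loosely in the spirit of the setup described above; all inputs are synthetic and the notation (H, R, prior) is generic, not the paper's.

```python
import numpy as np

# Scalar-source Gaussian inversion with an observation-error covariance scaled
# by a wind-speed-error proxy. Everything here is synthetic and illustrative.
rng = np.random.default_rng(1)

true_rate = 50.0                                    # true leak rate (arbitrary units), assumed
H = rng.uniform(0.5, 2.0, size=30)                  # synthetic footprint sensitivities
y = H * true_rate + rng.normal(0, 5.0, size=30)     # synthetic downwind observations

prior_rate, prior_var = 20.0, 40.0**2               # weak prior (assumed)
wind_error_scale = rng.uniform(1.0, 3.0, size=30)   # per-observation wind-error proxy
R = np.diag((5.0 * wind_error_scale) ** 2)          # scaled model-data mismatch covariance

# standard Gaussian posterior update for a single scalar source
HtRi = H @ np.linalg.inv(R)
post_var = 1.0 / (HtRi @ H + 1.0 / prior_var)
post_rate = post_var * (HtRi @ y + prior_rate / prior_var)

print(f"posterior leak-rate estimate: {post_rate:.1f} (truth used to synthesise data: {true_rate})")
```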

  9. First order error corrections in common introductory physics experiments

    Science.gov (United States)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is paid to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better and more refined physical models and understand physics concepts in greater detail. We thank a Clarion University undergraduate student grant for financial support involving this project.

  10. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  11. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  12. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four...... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  13. Content Validity of a Tool Measuring Medication Errors.

    Science.gov (United States)

    Tabassum, Nishat; Allana, Saleema; Saeed, Tanveer; Dias, Jacqueline Maria

    2015-08-01

    The objective of this study was to determine the content and face validity of a tool measuring medication errors among nursing students in baccalaureate nursing education. Data were collected from the Aga Khan University School of Nursing and Midwifery (AKUSoNaM), Karachi, from March to August 2014. The tool was developed utilizing the literature and the expertise of the team members, who were experts in different areas. The developed tool was then sent to five experts from across Karachi to ensure its content validity, which was assessed in terms of the relevance and clarity of the questions. The Scale Content Validity Index (S-CVI) for clarity and relevance of the questions was found to be 0.94 and 0.98, respectively. The tool measuring medication errors has excellent content validity. This tool should be used for future studies on medication errors, with different study populations such as medical students, doctors, and nurses.
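
    For readers unfamiliar with the index, the sketch below shows one common way an S-CVI is computed from expert ratings (item-level CVIs averaged across items); the five-expert panel, the 4-point relevance scale, and the ratings themselves are assumptions for illustration, not data from the study.

```python
# Ratings of each item by five experts on a 4-point relevance scale
# (1 = not relevant ... 4 = highly relevant). All values are hypothetical.
ratings = {
    "item_1": [4, 4, 3, 4, 4],
    "item_2": [3, 4, 4, 4, 3],
    "item_3": [4, 3, 4, 2, 4],
}

# Item-level CVI: proportion of experts rating the item 3 or 4
i_cvi = {item: sum(r >= 3 for r in scores) / len(scores)
         for item, scores in ratings.items()}

# Scale-level CVI (averaging approach): mean of the item-level CVIs
s_cvi_ave = sum(i_cvi.values()) / len(i_cvi)

print("I-CVI per item:", i_cvi)
print(f"S-CVI/Ave: {s_cvi_ave:.2f}")
```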

  14. AN ANALYSIS OF ACEHNESE EFL STUDENTS’ GRAMMATICAL ERRORS IN WRITING RECOUNT TEXTS

    Directory of Open Access Journals (Sweden)

    Qudwatin Nisak M. Isa

    2017-11-01

    Full Text Available This study aims at finding empirical evidence of the most common types of grammatical errors and the sources of errors in recount texts written by the first-year students of SMAS Babul Maghfirah, Aceh Besar. The subject of the study was a collection of students' personal writing documents of recount texts about their life experiences. The students' recount texts were analyzed by referring to Betty S. Azar's classification and Richards' theory on sources of errors. The findings showed that the total number of errors was 436. The two most frequent types of grammatical errors were Verb Tense and Word Choice. The major sources of error were Intralingual Error, Interference Error and Developmental Error, respectively. Furthermore, the findings suggest that it is necessary for EFL teachers to apply appropriate techniques and strategies in teaching recount texts, which focus on the past tense and the language features of the text, in order to reduce the possible errors made by the students.

  15. Human error as a source of disturbances in Swedish nuclear power plants

    International Nuclear Information System (INIS)

    Sokolowski, E.

    1985-01-01

    Events involving human errors at the Swedish nuclear power plants are registered and periodically analyzed. The philosophy behind the scheme for data collection and analysis is discussed. Human errors cause about 10% of the disturbances registered. Only a small part of these errors are committed by operators in the control room. These and other findings differ from those in other countries. Possible reasons are put forward

  16. Perceived relevance and information needs regarding food topics and preferred information sources among Dutch adults: results of a quantitative consumer study

    NARCIS (Netherlands)

    Dillen, van S.M.E.; Hiddink, G.J.; Koelen, M.A.; Graaf, de C.; Woerkum, van C.M.J.

    2004-01-01

    Objective: For more effective nutrition communication, it is crucial to identify sources from which consumers seek information. Our purpose was to assess perceived relevance and information needs regarding food topics, and preferred information sources by means of quantitative consumer research.

  17. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these examples remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reductions in error rates.

  18. A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers

    Science.gov (United States)

    Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Gudmundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen

    2016-01-01

    We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.
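
    The kind of modular budget described here can be expressed, in sketch form, as a set of individual noise terms combined in quadrature; the term names and magnitudes below are placeholders of my own, not NEID's actual budget entries.

```python
import math

# Hypothetical per-source radial velocity error terms, in m/s (illustrative only)
error_terms = {
    "photon noise": 0.27,
    "wavelength calibration": 0.10,
    "fiber illumination / modal noise": 0.08,
    "detector effects": 0.12,
    "telluric contamination": 0.10,
    "instrument drift residuals": 0.09,
}

# No single term sets the floor; the total is the quadrature sum of all terms.
total = math.sqrt(sum(v ** 2 for v in error_terms.values()))
dominant = max(error_terms, key=error_terms.get)
print(f"total instrumental RV error: {total:.2f} m/s (largest single term: {dominant})")
```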

  19. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    /backlash, manufacturing and assembly errors and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. Additionally, using the screw......, dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model both in polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error...

  20. Terrestrial neutron-induced soft errors in advanced memory devices

    CERN Document Server

    Nakamura, Takashi; Ibe, Eishi; Yahagi, Yasuo; Kameyama, Hideaki

    2008-01-01

    Terrestrial neutron-induced soft errors in semiconductor memory devices are currently a major concern in reliability issues. Understanding the mechanism and quantifying soft-error rates are primarily crucial for the design and quality assurance of semiconductor memory devices. This book covers the relevant up-to-date topics in terrestrial neutron-induced soft errors, and aims to provide succinct knowledge on neutron-induced soft errors to the readers by presenting several valuable and unique features.

  1. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Full Text Available Background: The increasing number of reports on medication errors and the resulting damage, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of pharmacy directors. Direct observation was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. The dispensing error was studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 provided complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering errors comprise most of the medication errors, improvements are needed in all four stages. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  2. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  3. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainties for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration are different for different kinds of instrumental measurements. The main sources of uncertainties for retrospective measurements conducted by the surface trap technique can be divided into two groups: errors of surface 210Pb (210Po) activity measurements and uncertainties in the transfer from 210Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface trap retrospective technique can be decreased to 35%.

  4. Medical Error and Moral Luck.

    Science.gov (United States)

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.

  5. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.
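
    A hedged numerical reading of this bound (my illustration, not the paper's worked example): for a rate k/n binary code, a segment of L trellis sections contains n*L encoded bits and admits at most 2^((n-k)L) distinct syndrome sequences, so a Hamming-style counting argument limits the guaranteed error-correcting radius t within that segment.

```python
from math import comb

def max_correctable_errors(n: int, k: int, L: int) -> int:
    """Largest t such that all error patterns of weight <= t in a segment of
    L trellis sections (n*L encoded bits) can still map to distinct syndromes,
    given at most 2**((n-k)*L) syndrome sequences of the relevant length."""
    bits = n * L
    syndromes = 2 ** ((n - k) * L)
    t, count = 0, 1                      # start with the single weight-0 pattern
    while t + 1 <= bits:
        count += comb(bits, t + 1)       # add patterns of the next weight
        if count > syndromes:
            break
        t += 1
    return t

# Example: rate-1/2 code (n=2, k=1) over segments of growing length
for L in (5, 10, 20, 40):
    print(f"segment length L={L:2d}: at most {max_correctable_errors(2, 1, L)} correctable errors")
```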

  6. Quantifying the Contributions of Environmental Parameters to Ceres Surface Net Radiation Error in China

    Science.gov (United States)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/net longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer, when cloud is abundant. For NLW, because of the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. AT has a large influence on the NLW error in southern China because of the large AT error there. Total precipitable water has only a weak influence on the Rn error, even though the algorithm is sensitive to it. In order to improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.

  7. Use and perceptions of information among family physicians: sources considered accessible, relevant, and reliable.

    Science.gov (United States)

    Kosteniuk, Julie G; Morgan, Debra G; D'Arcy, Carl K

    2013-01-01

    The research determined (1) the information sources that family physicians (FPs) most commonly use to update their general medical knowledge and to make specific clinical decisions, and (2) the information sources FPs found to be most physically accessible, intellectually accessible (easy to understand), reliable (trustworthy), and relevant to their needs. A cross-sectional postal survey of 792 FPs and locum tenens, in full-time or part-time medical practice, currently practicing or on leave of absence in the Canadian province of Saskatchewan was conducted during the period of January to April 2008. Of 666 eligible physicians, 331 completed and returned surveys, resulting in a response rate of 49.7% (331/666). Medical textbooks and colleagues in the main patient care setting were the top 2 sources for the purpose of making specific clinical decisions. Medical textbooks were most frequently considered by FPs to be reliable (trustworthy), and colleagues in the main patient care setting were most physically accessible (easy to access). When making specific clinical decisions, FPs were most likely to use information from sources that they considered to be reliable and generally physically accessible, suggesting that FPs can best be supported by facilitating easy and convenient access to high-quality information.

  8. Error related negativity and multi-source interference task in children with attention deficit hyperactivity disorder-combined type

    Directory of Open Access Journals (Sweden)

    Rosana Huerta-Albarrán

    2015-03-01

    Full Text Available Objective: To compare the performance of children with attention deficit hyperactivity disorder-combined (ADHD-C) type with control children in the multi-source interference task (MSIT), evaluated by means of error related negativity (ERN). Method: We studied 12 children with ADHD-C type with a median age of 7 years; control children were age- and gender-matched. Children performed the MSIT with simultaneous recording of ERN. Results: We found no differences in MSIT parameters between groups. We found no differences in ERN variables between groups. We found a significant association of ERN amplitude with MSIT in children with ADHD-C type. Some correlations went in a positive direction (frequency of hits and MSIT amplitude) and others in a negative direction (frequency of errors and RT in MSIT). Conclusion: Children with ADHD-C type exhibited a significant association between ERN amplitude and MSIT. These results underline the participation of a cingulo-fronto-parietal network and could help in the comprehension of the pathophysiological mechanisms of ADHD.

  9. IFMIF, a fusion relevant neutron source for material irradiation current status

    International Nuclear Information System (INIS)

    Knaster, J.; Chel, S.; Fischer, U.; Groeschel, F.; Heidinger, R.; Ibarra, A.; Micciche, G.; Möslang, A.; Sugimoto, M.; Wakai, E.

    2014-01-01

    The d-Li based International Fusion Materials Irradiation Facility (IFMIF) will provide a high-intensity neutron source with a suitable neutron spectrum to fulfil the requirements for testing and qualifying fusion materials under fusion reactor relevant irradiation conditions. The IFMIF project, presently in its Engineering Validation and Engineering Design Activities (EVEDA) phase under the Broader Approach (BA) Agreement between the Government of Japan and EURATOM, aims at the construction and testing of the most challenging facility sub-systems, such as the first accelerator stage, the Li target and loop, and the irradiation test modules, as well as the design of the entire facility, so as to be ready for IFMIF construction, with a clear understanding of schedule and cost, at the termination of the BA in mid-2017. The paper reviews the IFMIF facility and its principles, and reports on the status of the EVEDA activities and achievements

  10. Propagation of angular errors in two-axis rotation systems

    Science.gov (United States)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
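
    As a rough illustration of the root-sum-of-squares budgeting described above (with invented error sources, tolerances, and weighting factors, not the values tabulated in the paper), each source's mechanical tolerance is scaled by a sensitivity weight, combined in quadrature, and expanded by a coverage factor k:

```python
import math

# Hypothetical error sources: (tolerance in arcsec at k=1, sensitivity weighting factor)
sources = {
    "azimuth encoder":       (5.0, 1.00),
    "elevation encoder":     (5.0, 1.00),
    "axis orthogonality":    (8.0, 0.71),   # partially correctable -> reduced weight
    "stage wobble":          (3.0, 1.00),
    "zeroing repeatability": (4.0, 0.50),   # largely correctable by the zeroing technique
}

def rss_budget(sources, k=1.0):
    """Root-sum-of-squares pointing error, expanded by coverage factor k."""
    return k * math.sqrt(sum((tol * w) ** 2 for tol, w in sources.values()))

print(f"pointing error budget, 67% confidence (k=1): {rss_budget(sources, 1):.1f} arcsec")
print(f"pointing error budget, 95% confidence (k=2): {rss_budget(sources, 2):.1f} arcsec")
```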

  11. Assessing error sources for Landsat time series analysis for tropical test sites in Viet Nam and Ethiopia

    Science.gov (United States)

    Schultz, Michael; Verbesselt, Jan; Herold, Martin; Avitabile, Valerio

    2013-10-01

    Researchers who use remotely sensed data can spend half of their total effort analysing prior data. If this data preprocessing does not match the application, the time spent on data analysis can increase considerably and can lead to inaccuracies. Despite the existence of a number of methods for pre-processing Landsat time series, each method has shortcomings, particularly for mapping forest changes under varying illumination, data availability and atmospheric conditions. Based on the requirements of mapping forest changes as defined by the United Nations (UN) Reducing Emissions from Deforestation and Forest Degradation (REDD) program, accurate reporting of the spatio-temporal properties of these changes is necessary. We compared the impact of three fundamentally different radiometric preprocessing techniques, Moderate Resolution Atmospheric TRANsmission (MODTRAN), Second Simulation of a Satellite Signal in the Solar Spectrum (6S) and simple Dark Object Subtraction (DOS), on mapping forest changes using Landsat time series data. A modification of Breaks For Additive Season and Trend (BFAST) monitor was used to jointly map the spatial and temporal agreement of forest changes at test sites in Ethiopia and Viet Nam. The suitability of the pre-processing methods for the occurring forest change drivers was assessed using recently captured ground truth and high-resolution data (1000 points). A method for creating robust generic forest maps used for the sampling design is presented. An assessment of error sources was performed, identifying haze as a major source of commission error in the time series analysis.

  12. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve work piece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis and rotary table on the work piece side A′) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can separately reduce the geometrical and dimensional accuracy of work pieces. The machining accuracy of the work piece is governed by the position of the cutting tool center point (TCP) relative to the work piece; when the cutting tool deviates from its ideal position relative to the work piece, a machining error results. The compensation process consists of detecting the present tool path, analyzing the geometric error of the RTTTR five-axis CNC machine tool, translating current component positions to compensated positions using the kinematic error model, converting the newly created components to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
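
    A minimal numeric sketch (my illustration, not the paper's 43-component model) of how homogeneous transformation matrices propagate small geometric errors to the tool center point: each axis contributes a nominal transform plus a small error, and the TCP deviation is the difference between the actual and ideal kinematic chains.

```python
import numpy as np

def translation(x, y, z):
    """Homogeneous transform for a pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rot_z(angle):
    """Homogeneous transform for a rotation about the z axis (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

# Ideal kinematic chain from machine base to tool tip (illustrative axis values, mm and rad)
ideal = translation(0, 0, 400) @ rot_z(0.50) @ translation(300, 0, 150)

# The same chain with small geometric errors injected: a 0.01 mm z-stage positioning error,
# a 0.0004 rad angular error on the rotary axis, and a 0.02 mm x offset.
actual = translation(0, 0, 400.01) @ rot_z(0.50 + 4e-4) @ translation(300.02, 0, 150)

tcp = np.array([0.0, 0.0, 0.0, 1.0])          # tool center point expressed in the tool frame
deviation = (actual @ tcp) - (ideal @ tcp)
print("TCP deviation (mm):", np.round(deviation[:3], 4))
```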

  13. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
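
    In the spirit of the example-driven introduction described above, here is a toy classical simulation of the three-qubit bit-flip repetition code, the standard textbook starting point; the independent bit-flip noise model and the error probability are my own illustrative choices, not material from the review.

```python
import random

def encode(bit):
    return [bit, bit, bit]                 # |0> -> |000>, |1> -> |111> (bit-flip code)

def noisy_channel(codeword, p=0.05):
    return [b ^ (random.random() < p) for b in codeword]   # independent bit flips

def decode(codeword):
    return int(sum(codeword) >= 2)         # majority vote = syndrome-based correction

random.seed(1)
trials = 100_000
uncoded_fail = sum(random.random() < 0.05 for _ in range(trials)) / trials
coded_fail = sum(decode(noisy_channel(encode(0))) != 0 for _ in range(trials)) / trials
print(f"logical error rate without coding: {uncoded_fail:.4f}")
print(f"logical error rate with 3-bit repetition code: {coded_fail:.4f}")
```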

  14. Evaluation and Error Analysis for a Solar thermal Receiver

    Energy Technology Data Exchange (ETDEWEB)

    Pfander, M.

    2001-07-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs.

  15. Evaluation and Error Analysis for a Solar Thermal Receiver

    International Nuclear Information System (INIS)

    Pfander, M.

    2001-01-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs

  16. Refractive error assessment: influence of different optical elements and current limits of biometric techniques.

    Science.gov (United States)

    Ribeiro, Filomena; Castanheira-Dinis, Antonio; Dias, Joao Mendanha

    2013-03-01

    To identify and quantify sources of error in refractive assessment using exact ray tracing. The Liou-Brennan eye model was used as a starting point and its parameters were varied individually within a physiological range. The contribution of each parameter to refractive error was assessed using linear regression curve fits and Gaussian error propagation analysis. A Monte Carlo analysis quantified the limits of refractive assessment given by current biometric measurements. The vitreous and aqueous refractive indices are the elements that influence refractive error the most, with a 1% change of each parameter contributing to a refractive error variation of +1.60 and -1.30 diopters (D), respectively. In the phakic eye, axial length measurements taken by ultrasound (vitreous chamber depth, lens thickness, and anterior chamber depth [ACD]) were the most sensitive to biometric errors, with contributions to the refractive error of 62.7%, 14.2%, and 10.7%, respectively. In the pseudophakic eye, vitreous chamber depth showed the highest contribution at 53.7%, followed by postoperative ACD at 35.7%. When optical measurements were considered, postoperative ACD was the most important contributor, followed by the anterior corneal surface and its asphericity. A Monte Carlo simulation showed that the current limits of refractive assessment are 0.26 and 0.28 D for the phakic and pseudophakic eye, respectively. The most relevant optical elements either do not have available measurement instruments or the existing instruments still need to improve their accuracy. Ray tracing can be used as an optical assessment technique, and may be the correct path for future personalized refractive assessment. Copyright 2013, SLACK Incorporated.
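
    A short sketch of the kind of Gaussian error propagation and Monte Carlo cross-check the abstract refers to, using the two index sensitivities quoted above (+1.60 D and -1.30 D per 1% change) together with an assumed, purely illustrative 0.2% standard uncertainty for each refractive index:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sensitivities quoted in the abstract: refractive error change (D) per 1% change in index
sens_vitreous = +1.60
sens_aqueous  = -1.30

# Assumed (illustrative) standard uncertainties of the indices, in percent
sd_vitreous_pct = 0.2
sd_aqueous_pct  = 0.2

# Gaussian error propagation: combine independent contributions in quadrature
gauss = np.sqrt((sens_vitreous * sd_vitreous_pct) ** 2 +
                (sens_aqueous * sd_aqueous_pct) ** 2)

# Monte Carlo check with the same assumptions
n = 100_000
samples = (sens_vitreous * rng.normal(0, sd_vitreous_pct, n) +
           sens_aqueous * rng.normal(0, sd_aqueous_pct, n))
print(f"propagated refractive error SD: {gauss:.2f} D (analytic), {samples.std():.2f} D (Monte Carlo)")
```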

  17. Engagement in Learning after Errors at Work: Enabling Conditions and Types of Engagement

    Science.gov (United States)

    Bauer, Johannes; Mulder, Regina H.

    2013-01-01

    This article addresses two research questions concerning nurses' engagement in social learning activities after errors at work. Firstly, we investigated how this engagement relates to nurses' interpretations of the error situation and perceptions of a safe team climate. The results indicate that the individual estimation of an error as relevant to…

  18. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1982-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR 1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date

  19. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1981-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determined HEP's for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date

  20. From the Lab to the real world: sources of error in UF6 gas enrichment monitoring

    International Nuclear Information System (INIS)

    Lombardi, Marcie L.

    2012-01-01

    monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months

  1. Campylobacter species in animal, food, and environmental sources, and relevant testing programs in Canada.

    Science.gov (United States)

    Huang, Hongsheng; Brooks, Brian W; Lowman, Ruff; Carrillo, Catherine D

    2015-10-01

    Campylobacter species, particularly thermophilic campylobacters, have emerged as a leading cause of human foodborne gastroenteritis worldwide, with Campylobacter jejuni, Campylobacter coli, and Campylobacter lari responsible for the majority of human infections. Although most cases of campylobacteriosis are self-limiting, campylobacteriosis represents a significant public health burden. Human illness caused by infection with campylobacters has been reported across Canada since the early 1970s. Many studies have shown that dietary sources, including food, particularly raw poultry and other meat products, raw milk, and contaminated water, have contributed to outbreaks of campylobacteriosis in Canada. Campylobacter spp. have also been detected in a wide range of animal and environmental sources, including water, in Canada. The purpose of this article is to review (i) the prevalence of Campylobacter spp. in animals, food, and the environment, and (ii) the relevant testing programs in Canada with a focus on the potential links between campylobacters and human health in Canada.

  2. Wind speed errors for LIDARs and SODARs in complex terrain

    International Nuclear Information System (INIS)

    Bradley, S

    2008-01-01

    All commercial LIDARs and SODARs are monostatic and hence sample distributed volumes to construct wind vector components. We use an analytic potential flow model to estimate errors arising for a range of LIDAR and SODAR configurations on hills and escarpments. Wind speed errors peak at a height relevant to wind turbines and can typically be 20%.

  3. Wind speed errors for LIDARs and SODARs in complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, S [Physics Department, The University of Auckland, Private Bag 92019, Auckland (New Zealand) and School of Computing, Science and Engineering, University of Salford, M5 4WT (United Kingdom)], E-mail: s.bradley@auckland.ac.nz

    2008-05-01

    All commercial LIDARs and SODARs are monostatic and hence sample distributed volumes to construct wind vector components. We use an analytic potential flow model to estimate errors arising for a range of LIDAR and SODAR configurations on hills and escarpments. Wind speed errors peak at a height relevant to wind turbines and can be typically 20%.

  4. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  5. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    Science.gov (United States)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error and can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion
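
    For orientation, a minimal sketch of the cost-function-minimization (CFM) flavour of such an inversion, solving for sub-region scaling factors with assumed prior-error and model-data-mismatch covariances; all matrices and values below are invented placeholders, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

n_regions, n_obs = 4, 60
H = rng.uniform(0.0, 1.0, size=(n_obs, n_regions))   # transport sensitivities (footprints)
s_true = np.array([1.2, 0.8, 1.0, 1.5])              # "target" scaling factors
y = H @ s_true + rng.normal(0, 0.05, n_obs)          # synthetic observations with noise

s_prior = np.ones(n_regions)                          # prior scaling factors
Q = np.diag([0.5 ** 2] * n_regions)                   # prior flux error covariance
R = np.diag([0.05 ** 2] * n_obs)                      # model-data mismatch covariance

# Minimizing J(s) = (y - Hs)^T R^-1 (y - Hs) + (s - s_prior)^T Q^-1 (s - s_prior)
# has the closed-form Bayesian solution:
K = Q @ H.T @ np.linalg.inv(H @ Q @ H.T + R)          # gain matrix
s_post = s_prior + K @ (y - H @ s_prior)
print("posterior scaling factors:", np.round(s_post, 2))
```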

  6. Savannah River Site human error data base development for nonreactor nuclear facilities

    International Nuclear Information System (INIS)

    Benhardt, H.C.; Held, J.E.; Olsen, L.M.; Vail, R.E.; Eide, S.A.

    1994-01-01

    As part of an overall effort to upgrade and streamline methodologies for safety analyses of nonreactor nuclear facilities at the Savannah River Site (SRS), a human error data base has been developed and is presented in this report. The data base fulfills several needs of risk analysts supporting safety analysis report (SAR) development. First, it provides a single source for probabilities or rates for a wide variety of human errors associated with the SRS nonreactor nuclear facilities. Second, it provides a documented basis for human error probabilities or rates. And finally, it provides actual SRS-specific human error data to support many of the error probabilities or rates. Use of a single, documented reference source for human errors, supported by SRS-specific human error data, will improve the consistency and accuracy of human error modeling by SRS risk analysts. It is envisioned that SRS risk analysts will use this report as both a guide to identifying the types of human errors that may need to be included in risk models such as fault and event trees, and as a source for human error probabilities or rates. For each human error in this report, several different mean probabilities or rates are presented to cover a wide range of conditions and influencing factors. The risk analysts must decide which mean value is most appropriate for each particular application. If other types of human errors are needed for the risk models, the analyst must use other sources. Finally, if human errors are dominant in the quantified risk models (based on the values obtained from this report), then it may be appropriate to perform detailed human reliability analyses (HRAs) for the dominant events. This document does not provide guidance for such refined HRAs; in such cases experienced human reliability analysts should be involved

  7. Investigation of systematic errors of metastable "atomic pair" number

    CERN Document Server

    Yazkov, V

    2015-01-01

    Sources of systematic errors in the analysis of data collected in 2012 are analysed. Estimations of systematic errors in the number of "atomic pairs" from metastable π+π− atoms are presented.

  8. CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources.

    Science.gov (United States)

    Bleda, Marta; Tarraga, Joaquin; de Maria, Alejandro; Salavert, Francisco; Garcia-Alonso, Luz; Celma, Matilde; Martin, Ainoha; Dopazo, Joaquin; Medina, Ignacio

    2012-07-01

    During the past years, the advances in high-throughput technologies have produced an unprecedented growth in the number and size of repositories and databases storing relevant biological data. Today, there is more biological information than ever but, unfortunately, the current status of many of these repositories is far from optimal. Some of the most common problems are that the information is spread out across many small databases, that standards frequently differ among repositories, and that some databases are no longer supported or contain information that is too specific and unconnected. In addition, data size is increasingly becoming an obstacle when accessing or storing biological data. All these issues make it very difficult to extract and integrate information from different sources, to analyze experiments, or to access and query this information in a programmatic way. CellBase provides a solution to the growing need for integration by easing access to biological data. CellBase implements a set of RESTful web services that query a centralized database containing the most relevant biological data sources. The database is hosted in our servers and is regularly updated. CellBase documentation can be found at http://docs.bioinfo.cipf.es/projects/cellbase.

  9. Connecting Organic Aerosol Climate-Relevant Properties to Chemical Mechanisms of Sources and Processing

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Joel [Univ. of Washington, Seattle, WA (United States)

    2015-01-26

    The research conducted on this project aimed to improve our understanding of secondary organic aerosol (SOA) formation in the atmosphere, and how the properties of the SOA impact climate through its size, phase state, and optical properties. The goal of this project was to demonstrate that the use of molecular composition information to mechanistically connect source apportionment and climate properties can improve the physical basis for simulation of SOA formation and properties in climate models. The research involved developing and improving methods to provide online measurements of the molecular composition of SOA under atmospherically relevant conditions and to apply this technology to controlled simulation chamber experiments and field measurements. The science we have completed with the methodology will impact the simulation of aerosol particles in climate models.

  10. Spectrum of diagnostic errors in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca

    2010-10-28

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff's complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Error traps need to be uncovered and highlighted, in order to prevent repetition of the same mistakes. This article focuses on the spectrum of diagnostic errors in radiology, including a classification of the errors, and stresses the malpractice issues in mammography, chest radiology and obstetric sonography. Missed fractures in emergency and communication issues between radiologists and physicians are also discussed.

  11. Planned upgrade to the coaxial plasma source facility for high heat flux plasma flows relevant to tokamak disruption simulations

    International Nuclear Information System (INIS)

    Caress, R.W.; Mayo, R.M.; Carter, T.A.

    1995-01-01

    Plasma disruptions in tokamaks remain serious obstacles to the demonstration of economical fusion power. In disruption simulation experiments, some important effects have not been taken into account. Present disruption simulation experimental data do not include effects of the high magnetic fields expected near the PFCs in a tokamak major disruption. In addition, temporal and spatial scales are much too short in present simulation devices to be of direct relevance to tokamak disruptions. To address some of these inadequacies, an experimental program is planned at North Carolina State University employing an upgrade to the Coaxial Plasma Source (CPS-1) magnetized coaxial plasma gun facility. The advantages of the CPS-1 plasma source over present disruption simulation devices include the ability to irradiate large material samples at extremely high areal energy densities, and the ability to perform these material studies in the presence of a high magnetic field. Other tokamak disruption relevant features of CPS-1U include a high ion temperature, high electron temperature, and long pulse length

  12. Error analysis and system improvements in phase-stepping methods for photoelasticity

    International Nuclear Information System (INIS)

    Wenyan Ji

    1997-11-01

    In the past, automated photoelasticity has been demonstrated to be one of the most efficient techniques for determining the complete state of stress in a 3-D component. However, the measurement accuracy, which depends on many aspects of both the theoretical foundations and the experimental procedures, has not been studied properly. The objective of this thesis is to reveal the intrinsic properties of the errors, provide methods for reducing them and finally improve the system accuracy. A general formulation for a polariscope with all the optical elements in arbitrary orientations was deduced using the method of Mueller matrices. The deduction of this formulation indicates an inherent connectivity among the optical elements and gives knowledge of the errors. In addition, this formulation also shows a common foundation among the photoelastic techniques; consequently, these techniques share many common error sources. The phase-stepping system proposed by Patterson and Wang was used as an exemplar to analyse the errors and provide the proposed improvements. This system can be divided into four parts according to their function, namely the optical system, the light source, the image acquisition equipment and the image analysis software. All the possible error sources were investigated separately, and methods for reducing the influence of the errors and improving the system accuracy are presented. To identify the contribution of each possible error to the final system output, a model was used to simulate the errors and analyse their consequences. Therefore the contribution to the results from different error sources can be estimated quantitatively and finally the accuracy of the systems can be improved. For a conventional polariscope, the system accuracy can be as high as 99.23% for the fringe order and the error less than 5 degrees for the isoclinic angle. The PSIOS system is limited to the low fringe orders. For a fringe order of less than 1.5, the accuracy is 94.60% for fringe
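
    To make the Mueller-matrix formulation concrete, here is a small sketch (standard textbook matrices, not the thesis's full arbitrary-orientation formulation) that propagates a Stokes vector through a polarizer, a stressed sample modelled as a retarder at 45 degrees, and a crossed analyser, and reads off the transmitted intensity; the fringe order used is an arbitrary example.

```python
import numpy as np

def polarizer(theta):
    """Mueller matrix of an ideal linear polarizer with transmission axis at angle theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([[1, c, s, 0],
                           [c, c * c, c * s, 0],
                           [s, c * s, s * s, 0],
                           [0, 0, 0, 0]])

def retarder(theta, delta):
    """Mueller matrix of a linear retarder with fast axis at theta and retardance delta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0, 0],
                     [0, c * c + s * s * cd, c * s * (1 - cd), -s * sd],
                     [0, c * s * (1 - cd), s * s + c * c * cd, c * sd],
                     [0, s * sd, -c * sd, cd]])

# Unpolarised input light, then polarizer -> stressed sample (retarder at 45 deg,
# retardance proportional to the fringe order) -> crossed analyser.
S_in = np.array([1.0, 0.0, 0.0, 0.0])
delta = 2 * np.pi * 0.3                       # retardance for an assumed fringe order of 0.3
system = polarizer(np.pi / 2) @ retarder(np.pi / 4, delta) @ polarizer(0.0)
S_out = system @ S_in
print(f"transmitted intensity (dark-field plane polariscope): {S_out[0]:.3f}")
```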

  13. Economic impact of medication error: a systematic review.

    Science.gov (United States)

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  14. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
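
    A compact illustration of the two interval types discussed above (synthetic data, not the sugar maple measurements): for a simple linear regression, the standard error of the mean prediction, which drives the confidence interval, shrinks as sample size grows, while the standard error of an individual prediction, which drives the prediction interval, stays close to the residual scatter.

```python
import numpy as np

rng = np.random.default_rng(7)

def prediction_uncertainties(n, sigma=2.0):
    x = rng.uniform(10, 50, n)                       # e.g. tree diameters (arbitrary units)
    y = 0.8 * x + rng.normal(0, sigma, n)            # e.g. a biomass proxy with scatter
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 2)                     # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)
    x0 = np.array([1.0, 30.0])                       # prediction at diameter 30
    se_mean = np.sqrt(x0 @ cov @ x0)                 # uncertainty of the mean prediction (CI)
    se_pred = np.sqrt(s2 + x0 @ cov @ x0)            # uncertainty of an individual prediction (PI)
    return se_mean, se_pred

for n in (10, 30, 300):
    se_mean, se_pred = prediction_uncertainties(n)
    print(f"n={n:4d}  SE of mean prediction: {se_mean:.2f}   SE of individual prediction: {se_pred:.2f}")
```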

  15. SU-F-T-310: Does a Head-Mounted Ionization Chamber Detect IMRT Errors?

    International Nuclear Information System (INIS)

    Wegener, S; Herzog, B; Sauer, O

    2016-01-01

    Purpose: The conventional plan verification strategy is delivering a plan to a QA-phantom before the first treatment. Monitoring each fraction of the patient treatment in real-time would improve patient safety. We evaluated how well a new detector, the IQM (iRT Systems, Germany), is capable of detecting errors we induced into IMRT plans of three different treatment regions. Results were compared to an established phantom. Methods: Clinical plans of a brain, prostate and head-and-neck patient were modified in the Pinnacle planning system, such that they resulted in either several percent lower prescribed doses to the target volume or several percent higher doses to relevant organs at risk. Unaltered plans were measured on three days, modified plans once, each with the IQM at an Elekta Synergy with an Agility MLC. All plans were also measured with the ArcCHECK with the cavity plug and a PTW semiflex 31010 ionization chamber inserted. Measurements were evaluated with SNC patient software. Results: Repeated IQM measurements of the original plans were reproducible, such that a 1% deviation from the mean as warning and 3% as action level as suggested by the manufacturer seemed reasonable. The IQM detected most of the simulated errors including wrong energy, a faulty leaf, wrong trial exported and a 2 mm shift of one leaf bank. Detection limits were reached for two plans - a 2 mm field position error and a leaf bank offset combined with an MU change. ArcCHECK evaluation according to our current standards also left undetected errors. Ionization chamber evaluation alone would leave most errors undetected. Conclusion: The IQM detected most errors and performed as well as currently established phantoms with the advantage that it can be used throughout the whole treatment. Drawback is that it does not indicate the source of the error.

  16. SU-F-T-310: Does a Head-Mounted Ionization Chamber Detect IMRT Errors?

    Energy Technology Data Exchange (ETDEWEB)

    Wegener, S; Herzog, B; Sauer, O [University of Wuerzburg, Wuerzburg (Germany)

    2016-06-15

    Purpose: The conventional plan verification strategy is delivering a plan to a QA-phantom before the first treatment. Monitoring each fraction of the patient treatment in real-time would improve patient safety. We evaluated how well a new detector, the IQM (iRT Systems, Germany), is capable of detecting errors we induced into IMRT plans of three different treatment regions. Results were compared to an established phantom. Methods: Clinical plans of a brain, prostate and head-and-neck patient were modified in the Pinnacle planning system, such that they resulted in either several percent lower prescribed doses to the target volume or several percent higher doses to relevant organs at risk. Unaltered plans were measured on three days, modified plans once, each with the IQM at an Elekta Synergy with an Agility MLC. All plans were also measured with the ArcCHECK with the cavity plug and a PTW semiflex 31010 ionization chamber inserted. Measurements were evaluated with SNC patient software. Results: Repeated IQM measurements of the original plans were reproducible, such that a 1% deviation from the mean as warning and 3% as action level as suggested by the manufacturer seemed reasonable. The IQM detected most of the simulated errors including wrong energy, a faulty leaf, wrong trial exported and a 2 mm shift of one leaf bank. Detection limits were reached for two plans - a 2 mm field position error and a leaf bank offset combined with an MU change. ArcCHECK evaluation according to our current standards also left undetected errors. Ionization chamber evaluation alone would leave most errors undetected. Conclusion: The IQM detected most errors and performed as well as currently established phantoms with the advantage that it can be used throughout the whole treatment. Drawback is that it does not indicate the source of the error.

  17. Research on Human-Error Factors of Civil Aircraft Pilots Based On Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Guo Yundong

    2018-01-01

    Full Text Available In consideration of the situation that civil aviation accidents involve many human-error factors and show the features of typical grey systems, an index system of civil aviation accident human-error factors is built using the human factors analysis and classification system model. With data on accidents that happened worldwide between 2008 and 2011, the correlations between human-error factors can be analyzed quantitatively using the method of grey relational analysis. Research results show that the order of the main factors affecting pilot human-error factors is preconditions for unsafe acts, unsafe supervision, organization and unsafe acts. The factor related most closely with the second-level indexes and pilot human-error factors is the physical/mental limitations of pilots, followed by supervisory violations. The relevancy between the first-level indexes and the corresponding second-level indexes, and the relevancy between second-level indexes, can also be analyzed quantitatively.
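
    For readers unfamiliar with the method, the following sketch shows the core grey relational analysis computation on made-up factor scores (assumed to be pre-normalised to [0, 1]; none of the numbers come from the paper): sequences closer to the reference sequence receive higher grey relational grades.

```python
# Grey relational analysis on made-up, pre-normalised factor scores.
import numpy as np

reference = np.array([0.8, 0.6, 0.9, 0.7])   # reference (pilot human-error) sequence
factors = {
    "preconditions_for_unsafe_acts": np.array([0.7, 0.5, 0.85, 0.65]),
    "unsafe_supervision":            np.array([0.6, 0.7, 0.60, 0.80]),
    "organization":                  np.array([0.4, 0.3, 0.50, 0.40]),
}
rho = 0.5  # distinguishing coefficient, conventionally 0.5

deltas = {name: np.abs(reference - seq) for name, seq in factors.items()}
d_all = np.concatenate(list(deltas.values()))
d_min, d_max = d_all.min(), d_all.max()

for name, d in deltas.items():
    xi = (d_min + rho * d_max) / (d + rho * d_max)   # grey relational coefficients
    print(f"{name:32s} grey relational grade = {xi.mean():.3f}")
```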

  18. Optimal full motion video registration with rigorous error propagation

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  19. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
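
    The behaviour described in point (2) can be reproduced with a toy leaky-bucket signal unit error rate monitor. The threshold and leak parameters below (T = 64, D = 256) are the commonly cited ITU-T Q.703 SUERM values, assumed here purely for illustration rather than taken from the paper.

```python
# Toy leaky-bucket error rate monitor: count how often a link would be taken
# out of service (changeover) at different signal-unit error rates.
import random

def changeovers(error_rate, n_units=500_000, T=64, D=256, seed=1):
    rng = random.Random(seed)
    counter, since_leak, triggers = 0, 0, 0
    for _ in range(n_units):
        if rng.random() < error_rate:
            counter += 1              # errored signal unit fills the bucket
        since_leak += 1
        if since_leak == D:           # every D signal units, leak one count
            counter = max(counter - 1, 0)
            since_leak = 0
        if counter >= T:              # threshold reached: changeover
            triggers += 1
            counter, since_leak = 0, 0
    return triggers

for rate in (1e-4, 1e-3, 4e-3, 1e-2):
    print(f"error rate {rate:.0e}: {changeovers(rate)} changeovers in 500k signal units")
```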

  20. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software.

  1. Systematic Errors in Dimensional X-ray Computed Tomography

    DEFF Research Database (Denmark)

    In dimensional X-ray computed tomography (CT), many physical quantities influence the final result. It is therefore important to know which factors in CT measurements potentially lead to systematic errors so that it is possible to compensate for them. In this talk, typical error sources in dimensional X-ray CT are discussed...

  2. European Legislation to Prevent Loss of Control of Sources and to Recover Orphan Sources, and Other Requirements Relevant to the Scrap Metal Industry

    Energy Technology Data Exchange (ETDEWEB)

    Janssens, A.; Tanner, V.; Mundigl, S., E-mail: augustin.janssens@ec.europa.eu [European Commission (Luxembourg)

    2011-07-15

    European legislation (Council Directive 2003/122/EURATOM) has been adopted with regard to the control of high-activity sealed radioactive sources (HASS). This Directive is now part of an overall recast of current radiation protection legislation. At the same time the main Directive, 96/29/EURATOM, laying down Basic Safety Standards (BSS) for the health protection of the general public and workers against the dangers of ionizing radiation, is being revised in the light of the new recommendations of the International Commission on Radiological Protection (ICRP). The provisions for exemption and clearance are a further relevant feature of the new BSS. The current issues emerging from the revision and recast of the BSS are discussed, in the framework of the need to protect the scrap metal industry from orphan sources and to manage contaminated metal products. (author)

  3. Performance of multi-aperture grid extraction systems for an ITER-relevant RF-driven negative hydrogen ion source

    Science.gov (United States)

    Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Fröschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Stäbler, A.; Wünderlich, D.

    2011-07-01

    The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimension 1.9 × 0.9 m² by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (~1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on a comparison at BATMAN of the source performance of an ITER-relevant extraction system equipped with chamfered 14 mm diameter apertures and of an 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density—being consistent with ion trajectory calculations—and the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.

  4. Performance of multi-aperture grid extraction systems for an ITER-relevant RF-driven negative hydrogen ion source

    International Nuclear Information System (INIS)

    Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Froeschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Staebler, A.; Wuenderlich, D.

    2011-01-01

    The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimension 1.9 x 0.9 m² by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (∼1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on a comparison at BATMAN of the source performance of an ITER-relevant extraction system equipped with chamfered 14 mm diameter apertures and of an 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density (consistent with ion trajectory calculations) or in the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.

  5. Phylogeny and source climate impact seed dormancy and germination of restoration-relevant forb species.

    Science.gov (United States)

    Seglias, Alexandra E; Williams, Evelyn; Bilge, Arman; Kramer, Andrea T

    2018-01-01

    For many species and seed sources used in restoration activities, specific seed germination requirements are often unknown. Because seed dormancy and germination traits can be constrained by phylogenetic history, related species are often assumed to have similar traits. However, significant variation in these traits is also present within species as a result of adaptation to local climatic conditions. A growing number of studies have attempted to disentangle how phylogeny and climate influence seed dormancy and germination traits, but they have focused primarily on species-level effects, ignoring potential population-level variation. We examined the relationships between phylogeny, climate, and seed dormancy and germination traits for 24 populations of eight native, restoration-relevant forb species found in a wide range of climatic conditions in the Southwest United States. The seeds were exposed to eight temperature and stratification length regimes designed to mimic regional climatic conditions. Phylogenetic relatedness, overall climatic conditions, and temperature conditions at the site were all significantly correlated with final germination response, with significant among-population variation in germination response across incubation treatments for seven of our eight study species. Notably, germination during stratification was significantly predicted by precipitation seasonality and differed significantly among populations for seven species. While previous studies have not examined germination during stratification as a potential trait influencing overall germination response, our results suggest that this trait should be included in germination studies as well as seed sourcing decisions. Results of this study deepen our understanding of the relationships between source climate, species identity, and germination, leading to improved seed sourcing decisions for restorations.

  6. Limit of detection in the presence of instrumental and non-instrumental errors: study of the possible sources of error and application to the analysis of 41 elements at trace levels by inductively coupled plasma-mass spectrometry technique

    International Nuclear Information System (INIS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Tapparo, Andrea; Pastore, Paolo

    2015-01-01

    In this paper the detection limit was estimated when signals were affected by two error contributions, namely instrumental errors and operational (non-instrumental) errors. The detection limit was theoretically obtained following the hypothesis testing schema implemented with the calibration curve methodology. The experimental calibration design was based on J standards measured I times, with non-instrumental errors affecting each standard systematically but randomly among the J levels. A two-component variance regression was performed to determine the calibration curve and to define the detection limit in these conditions. The detection limit values obtained from the calibration at trace levels of 41 elements by ICP-MS were larger than those obtainable from a one-component variance regression. The role of the reagent impurities in the instrumental errors was ascertained and taken into account. Environmental pollution was studied as a source of non-instrumental errors. The role of environmental pollution was evaluated by the Principal Component Analysis technique (PCA) applied to a series of nine calibrations performed over fourteen months. The influence of the seasonality of the environmental pollution on the detection limit was evidenced for many elements usually present in urban air particulate. The obtained results clearly indicated the need to use the two-component variance regression approach for the calibration of all the elements usually present in the environment at significant concentration levels. - Highlights: • Limit of detection was obtained considering a two variance component regression. • Calibration data may be affected by instrumental and operational conditions errors. • Calibration model was applied to determine 41 elements at trace level by ICP-MS. • Non instrumental errors were evidenced by PCA analysis
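
    For contrast with the two-component variance regression described above, the sketch below shows the simpler, purely instrumental case on synthetic data: an ordinary least-squares calibration line and the common 3.3·s/slope approximation of the detection limit. The concentrations, signals and noise level are invented for illustration.

```python
# One-variance-component calibration and a simple detection-limit estimate.
import numpy as np

rng = np.random.default_rng(0)
conc = np.repeat([0.0, 0.5, 1.0, 2.0, 5.0, 10.0], 3)            # J standards x I replicates (ng/mL)
signal = 50.0 + 120.0 * conc + rng.normal(0, 8.0, conc.size)    # counts with instrumental noise only

coeffs, residuals, *_ = np.polyfit(conc, signal, 1, full=True)  # OLS calibration line
slope, intercept = coeffs
s_res = np.sqrt(residuals[0] / (conc.size - 2))                 # residual standard deviation

lod = 3.3 * s_res / slope                                       # common approximation
print(f"slope={slope:.1f}  intercept={intercept:.1f}  s={s_res:.1f}  LOD ~ {lod:.3f} ng/mL")
```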

  7. The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.

    Science.gov (United States)

    Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al

    2018-05-07

    To examine the relationship between overall and source-specific work-related stress and the medication error rate. A cross-sectional study examined the relationship between overall levels of stress, 25 source-specific work-related stressors and the medication error rate based on documented incident reports in a Saudi Arabian (SA) hospital, using secondary databases. King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Two hundred and sixty-nine healthcare professionals (HCPs). The odds ratio (OR) and corresponding 95% confidence interval (CI) for HCPs' documented incident-report medication errors and self-reported sources from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stress as significantly associated with HCPs who made at least one medication error per month; HCPs with high overall stress showed a trend toward being two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors rather than self-report to evaluate the level of stress-related medication errors in SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties) and higher income, were significantly associated with medication errors, whereas overall stress revealed a 2-fold higher trend.

  8. VR-based training and assessment in ultrasound-guided regional anesthesia: from error analysis to system design.

    LENUS (Irish Health Repository)

    2011-01-01

    If VR-based medical training and assessment is to improve patient care and safety (i.e. a genuine health gain), it has to be based on clinically relevant measurement of performance. Metrics on errors are particularly useful for capturing and correcting undesired behaviors before they occur in the operating room. However, translating clinically relevant metrics and errors into meaningful system design is a challenging process. This paper discusses how an existing task and error analysis was translated into the system design of a VR-based training and assessment environment for Ultrasound Guided Regional Anesthesia (UGRA).

  9. From the Lab to the real world: sources of error in UF6 gas enrichment monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Lombardi, Marcie L.

    2012-03-01

    UF6 gas enrichment monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.

  10. Collection of offshore human error probability data

    International Nuclear Information System (INIS)

    Basra, Gurpreet; Kirwan, Barry

    1998-01-01

    Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses

  11. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  12. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance
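
    A minimal numerical reading of the idea, on a hypothetical linear system: the theoretical weighted least squares covariance is rescaled by the average weighted residual variance, so that unmodeled error sources inflate the reported covariance. This is only a simplified sketch of the approach described in the abstract, not the authors' implementation.

```python
# Theoretical vs empirically scaled state error covariance for a linear WLS fit.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_state = 50, 3
H = rng.normal(size=(n_obs, n_state))            # observation matrix
x_true = np.array([1.0, -2.0, 0.5])
sigma_assumed = 0.1                              # assumed measurement sigma
sigma_actual = 0.3                               # actual errors are larger (unmodeled effects)
z = H @ x_true + rng.normal(0, sigma_actual, n_obs)

W = np.eye(n_obs) / sigma_assumed**2             # weights from the assumed accuracy
P_theory = np.linalg.inv(H.T @ W @ H)            # traditional state error covariance
x_hat = P_theory @ H.T @ W @ z

r = z - H @ x_hat                                # measurement residuals
scale = (r @ W @ r) / (n_obs - n_state)          # average weighted residual variance
P_empirical = scale * P_theory                   # empirical state error covariance

print("scale factor ~", round(float(scale), 1))  # roughly (0.3/0.1)^2 = 9
print("theoretical sigmas:", np.sqrt(np.diag(P_theory)).round(3))
print("empirical   sigmas:", np.sqrt(np.diag(P_empirical)).round(3))
```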

  13. Review of U.S. Army Unmanned Aerial Systems Accident Reports: Analysis of Human Error Contributions

    Science.gov (United States)

    2018-03-20

    ... within report documents. The information presented was obtained through a request to use the U.S. Army Combat Readiness Center’s Risk Management ... controlled flight into terrain (13 accidents), fueling errors by improper techniques (7 accidents), and a variety of maintenance errors (10 accidents). ... 9 of the 10 maintenance accidents. [Table 4: Frequencies Based on Source of Human Error.]

  14. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and

  15. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke eKruse-Weber

    2014-07-01

    Full Text Available Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  16. Reduction of weighing errors caused by tritium decay heating

    International Nuclear Information System (INIS)

    Shaw, J.F.

    1978-01-01

    The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 104-cm³ mix vessel. Air cooling the vessel reduces the weighing error by 90%.

  17. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    The scientific article is focused on the research of the notion of medical error; its medical and legal aspects have been considered. The necessity of the legislative consolidation of the notion of «medical error» and the criteria for its legal assessment have been grounded. In the process of writing the article, we used the empirical method as well as general scientific and comparative legal methods. A comparison of the concept of medical error in its civil and legal aspects was made from the point of view of Ukrainian, European and American scholars. It has been noted that the problem of medical errors has been known since ancient times and that, regardless of the level of development of medicine, there is no country in the world where doctors never make errors. According to the statistics, medical errors are among the first five causes of death worldwide. At the same time, the provision of medical services concerns practically everyone. As human life and health are recognized in Ukraine as the highest social values, medical services must be high-quality and effective. The provision of low-quality medical services causes harm to health, and sometimes to the lives of people; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must be given legal consolidation. Besides, the legal assessment of medical errors must be based on uniform principles enshrined in the legislation and confirmed by judicial practice.

  18. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear what role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.
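
    One standard compensation for the illumination and fixed-pattern sensor effects mentioned above is flat-field (gain/offset) correction. The sketch below uses synthetic frames purely for illustration; a real system would calibrate with measured dark and flat-field images.

```python
# Flat-field correction on a synthetic image with vignetting and a dark offset.
import numpy as np

rng = np.random.default_rng(3)
h, w = 64, 64
scene = np.full((h, w), 100.0)
scene[20:40, 20:40] = 200.0                       # a bright square "feature"

yy, xx = np.mgrid[0:h, 0:w]
vignette = 1.0 - 0.4 * (((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (h * w))  # illumination falloff
dark = 5.0 + rng.normal(0, 0.5, (h, w))           # fixed-pattern sensor offset

raw = scene * vignette + dark                     # what the camera records
flat = 150.0 * vignette + dark                    # frame of a uniform reference target

corrected = (raw - dark) / np.clip(flat - dark, 1e-6, None)
corrected *= np.mean(flat - dark)                 # restore the overall intensity scale

print("raw background, corner vs edge:      ", round(raw[2, 2], 1), round(raw[2, 32], 1))
print("corrected background, corner vs edge:", round(corrected[2, 2], 1), round(corrected[2, 32], 1))
```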

  20. [Sources of error in the European Pharmacopoeia assay of halide salts of organic bases by titration with alkali].

    Science.gov (United States)

    Kószeginé, S H; Ráfliné, R Z; Paál, T; Török, I

    2000-01-01

    A short overview has been given by the authors on the titrimetric assay methods of halide salts of organic bases in the pharmacopoeias of greatest importance. The alternative procedures introduced by the European Pharmacopoeia Commission some years ago to replace the non-aqueous titration with perchloric acid in the presence of mercuric acetate have also been presented and evaluated. The authors investigated the limits of applicability and the sources of systematic errors (bias) of the strongly preferred titration with sodium hydroxide in an alcoholic medium. To assess the bias due to the differences between the results calculated from the two inflexion points of the titration curves and the two real endpoints corresponding to the strong and weak acids, respectively, the mathematical analysis of the titration curve function was carried out. This bias, generally negligible when the pH change near the endpoint of the titration is more than 1 unit, is the function of the concentration, the apparent pK of the analyte and the ionic product of water (ethanol) in the alcohol-water mixtures. Using the validation data gained for the method with the titration of ephedrine hydrochloride the authors analysed the impact of carbon dioxide in the titration medium on the additive and proportional systematic errors of the method. The newly introduced standardisation procedure of the European Pharmacopoeia for the sodium hydroxide titrant to decrease the systematic errors caused by carbon dioxide has also been evaluated.

  1. Estimation of error fields from ferromagnetic parts in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Oliva, A. Bonito [Fusion for Energy (Spain); Chiariello, A.G.; Formisano, A.; Martone, R. [Ass. EURATOM/ENEA/CREATE, Dip. di Ing. Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, I-81031 Napoli (Italy); Portone, A., E-mail: alfredo.portone@f4e.europa.eu [Fusion for Energy (Spain); Testoni, P. [Fusion for Energy (Spain)

    2013-10-15

    Highlights: ► The paper deals with error fields generated in ITER by magnetic masses. ► Magnetization state is computed from simplified FEM models. ► Closed form expressions adopted for the flux density of magnetized parts are given. ► Such expressions allow one to simplify the estimation of the effect of iron pieces (or the lack thereof) on the error field. -- Abstract: Error fields in tokamaks are small departures from the exact axisymmetry of the ideal magnetic field configuration. Their reduction below a threshold value by the error field correction coils is essential, since sufficiently large static error fields lead to discharge disruption. The error fields originate not only from magnet fabrication and installation tolerances, from the joints and from the busbars, but also from the presence of ferromagnetic elements. It was shown that superconducting joints, feeders and busbars have a secondary effect; however, in order to estimate the importance of each possible error field source, rough evaluations can be very useful because they provide an order of magnitude of the corresponding effect and, therefore, a ranking of the requests for in-depth analysis. The paper proposes a two-step procedure. The first step aims to get the approximate magnetization state of ferromagnetic parts; the second aims to estimate the full 3D error field over the whole volume using equivalent sources for magnetic masses and taking advantage of well-assessed approximate closed-form expressions, well suited for the far-distance effects.

  2. Impact of error fields on plasma identification in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Martone, R., E-mail: Raffaele.Martone@unina2.it [Ass. EURATOM/ENEA/CREATE, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Appel, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon (United Kingdom); Chiariello, A.G.; Formisano, A.; Mattei, M. [Ass. EURATOM/ENEA/CREATE, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Pironti, A. [Ass. EURATOM/ENEA/CREATE, Università degli Studi di Napoli “Federico II”, Via Claudio 25, Napoli (Italy)

    2013-10-15

    Highlights: ► The paper deals with the effect on plasma identification of error fields generated by field coils manufacturing and assembly errors. ► EFIT++ is used to identify plasma gaps when poloidal field coils and central solenoid coils are deformed, and the gaps sensitivity with respect to such errors is analyzed. ► Some examples of reconstruction errors in the presence of deformations are reported. -- Abstract: The active control of plasma discharges in present Tokamak devices must be prompt and accurate to guarantee expected performance. As a consequence, the identification step, calculating plasma parameters from diagnostics, should provide in a very short time reliable estimates of the relevant quantities, such as plasma centroid position, plasma-wall distances at given points called gaps, and other geometrical parameters as elongation and triangularity. To achieve the desired response promptness, a number of simplifying assumptions are usually made in the identification algorithms. Among those clearly affecting the quality of the plasma parameters reconstruction, one of the most relevant is the precise knowledge of the magnetic field produced by active coils. Since uncertainties in their manufacturing and assembly process may cause misalignments between the actual and expected geometry and position of magnets, an analysis on the effect of possible wrong information about magnets on the plasma shape identification is documented in this paper.

  3. The concept of error and malpractice in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca; Pinto, Fabio; Reali, Riccardo; Daniele, Stefania; Romano, Luigia

    2012-08-01

    Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. The etiology of radiological error is multifactorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge, and misjudgments. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Errors are an inevitable part of human life, and every health professional has made mistakes. To improve patient safety and reduce the risk of harm, we must accept that some errors are inevitable during the delivery of health care. We must pursue a cultural change in medicine, wherein errors are actively sought, openly discussed, and aggressively addressed. Copyright © 2012 Elsevier Inc. All rights reserved.

  4. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

    Full Text Available Measurement errors of a capacitive voltage transformer (CVT) are relevant to its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, etc., exert mixed effects on the capacitive divider’s insulation characteristics, leading to fluctuations in the equivalent parameters that result in measurement errors. This paper proposes an equivalent circuit model to represent a CVT which incorporates the insulation characteristics of the capacitive divider. After software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters in a CVT will cause a corresponding measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As capacitance changes by 0.2%, the magnitude error can reach −0.2%. As the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real power measurement error. An increase of equivalent capacitance and dielectric loss factor in the low-voltage capacitor will cause a negative real power measurement error.

  5. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  6. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  7. Consequences of leaf calibration errors on IMRT delivery

    International Nuclear Information System (INIS)

    Sastre-Padro, M; Welleweerd, J; Malinen, E; Eilertsen, K; Olsen, D R; Heide, U A van der

    2007-01-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT
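
    The gamma index referred to above combines a dose-difference and a distance-to-agreement criterion. The sketch below is a simplified 1-D, global-normalisation version with the same 2%/2 mm criteria, run on synthetic profiles; clinical tools evaluate full 2-D or 3-D dose distributions.

```python
# Simplified 1-D global gamma evaluation with 2%/2 mm criteria.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
    """Gamma value at each reference point (global dose normalisation)."""
    d_norm = dd * d_ref.max()                       # 2% of the global maximum dose
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta) ** 2          # spatial term, (mm / 2 mm)^2
        dose2 = ((d_eval - dr) / d_norm) ** 2       # dose-difference term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

x = np.linspace(-30.0, 30.0, 241)                          # position (mm)
ref = 100.0 / (1.0 + np.exp(np.abs(x) - 20.0))             # reference field profile
meas = 100.0 / (1.0 + np.exp(np.abs(x - 1.0) - 20.0))      # measured profile, edge shifted 1 mm

g = gamma_1d(x, ref, x, meas)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0) * 100:.1f}%")
```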

  8. Prescribing errors in a Brazilian neonatal intensive care unit

    Directory of Open Access Journals (Sweden)

    Ana Paula Cezar Machado

    2015-12-01

    Full Text Available Pediatric patients, especially those admitted to the neonatal intensive care unit (ICU), are highly vulnerable to medication errors. This study aimed to measure the prescription error rate in a university hospital neonatal ICU and to identify susceptible patients, types of errors, and the medicines involved. The variables related to the medicines prescribed were compared to the Neofax prescription protocol. The study enrolled 150 newborns and analyzed 489 prescription order forms, with 1,491 medication items, corresponding to 46 drugs. The prescription error rate was 43.5%. Errors were found in dosage, intervals, diluents, and infusion time, distributed across 7 therapeutic classes. Errors were more frequent in preterm newborns. Diluent and dosing were the most frequent sources of errors. The therapeutic classes most involved in errors were antimicrobial agents and drugs that act on the nervous and cardiovascular systems.

  9. BAYES-HEP: Bayesian belief networks for estimation of human error probability

    International Nuclear Information System (INIS)

    Karthick, M.; Senthil Kumar, C.; Paul, Robert T.

    2017-01-01

    Human errors contribute a significant portion of risk in safety critical applications and methods for estimation of human error probability have been a topic of research for over a decade. The scarce data available on human errors and large uncertainty involved in the prediction of human error probabilities make the task difficult. This paper presents a Bayesian belief network (BBN) model for human error probability estimation in safety critical functions of a nuclear power plant. The developed model using BBN would help to estimate HEP with limited human intervention. A step-by-step illustration of the application of the method and subsequent evaluation is provided with a relevant case study and the model is expected to provide useful insights into risk assessment studies
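
    The following toy example illustrates the BBN idea on the smallest possible scale: a human error probability node conditioned on two performance-shaping factors, marginalised over their priors. All node names and probabilities are hypothetical placeholders, not the paper's model.

```python
# Toy two-parent Bayesian belief network for a human error probability (HEP) node.
import itertools

p_stress = {"high": 0.2, "low": 0.8}        # prior on operator stress (hypothetical)
p_training = {"poor": 0.3, "good": 0.7}     # prior on training quality (hypothetical)

cpt_error = {                               # P(error | stress, training)
    ("high", "poor"): 0.10,
    ("high", "good"): 0.02,
    ("low",  "poor"): 0.03,
    ("low",  "good"): 0.005,
}

# marginal HEP: sum over parent states of P(error | parents) * P(parents)
hep = sum(cpt_error[(s, t)] * p_stress[s] * p_training[t]
          for s, t in itertools.product(p_stress, p_training))
print(f"marginal human error probability: {hep:.4f}")

# diagnostic query via Bayes' rule: P(stress = high | an error was observed)
p_err_given_high = sum(cpt_error[("high", t)] * p_training[t] for t in p_training)
print(f"P(stress = high | error): {p_err_given_high * p_stress['high'] / hep:.3f}")
```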

  10. The Error Reporting in the ATLAS TDAQ System

    Science.gov (United States)

    Kolos, Serguei; Kazarov, Andrei; Papaevgeniou, Lykourgos

    2015-05-01

    The ATLAS Error Reporting provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about run-time errors to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the ERS as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment in which the online applications are operating. When an application sends information to ERS, depending on the configuration, it may end up in a local file, a database, or distributed middleware which can transport it to an expert system or display it to users. Thanks to the open framework design of ERS, new information destinations can be added at any moment without touching the reporting and receiving applications. The ERS Application Program Interface (API) is provided in three programming languages used in the ATLAS online environment: C++, Java and Python. All APIs use exceptions for error reporting, but each of them exploits advanced features of a given language to simplify end-user program writing. For example, as C++ lacks built-in language support for declaring rich exception hierarchies concisely, a number of macros have been designed to generate hierarchies of C++ exception classes at compile time. Using this approach a software developer can write a single line of code to generate boilerplate code for a fully qualified C++ exception class declaration with an arbitrary number of parameters and multiple constructors, which encapsulates all relevant static information about the given type of issue. When a corresponding error occurs at run time, the program just needs to create an instance of that class passing relevant values to one

  11. Reed-Solomon error-correction as a software patch mechanism.

    Energy Technology Data Exchange (ETDEWEB)

    Pendley, Kevin D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-11-01

    This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
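
    A hedged sketch of the idea follows, using the third-party reedsolo package (an assumed choice of Reed-Solomon implementation, not one named in the report) and toy byte strings of equal length: parity computed over the updated file is shipped as the "patch", and decoding repairs an installed copy that differs in at most nsym/2 byte positions. Real use would operate block-wise over a whole codebase.

```python
# Reed-Solomon parity as a "patch": parity over the updated bytes repairs the
# installed bytes, provided they differ in few enough positions.
from reedsolo import RSCodec

nsym = 32                      # parity bytes; corrects up to 16 erroneous bytes
rsc = RSCodec(nsym)

installed = b"def greet():\n    print('helo  world')\n"   # old, buggy copy
updated   = b"def greet():\n    print('hello world')\n"   # upstream, fixed copy (same length)

encoded = rsc.encode(updated)              # updated data followed by parity
parity = bytes(encoded[len(updated):])     # ship only this as the patch

received = installed + parity              # apply the parity to the installed copy
decoded = rsc.decode(received)             # newer reedsolo versions return a tuple
repaired = decoded[0] if isinstance(decoded, tuple) else decoded
print(bytes(repaired) == updated)          # True: installed copy repaired into the update
```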

  12. Endogeneity, Time-Varying Coefficients, and Incorrect vs. Correct Ways of Specifying the Error Terms of Econometric Models

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2017-02-01

    Full Text Available Using the net effect of all relevant regressors omitted from a model to form its error term is incorrect because the coefficients and error term of such a model are non-unique. Non-unique coefficients cannot possess consistent estimators. Uniqueness can be achieved if, instead, one uses certain “sufficient sets” of (relevant) regressors omitted from each model to represent the error term. In this case, the unique coefficient on any non-constant regressor takes the form of the sum of a bias-free component and omitted-regressor biases. Measurement-error bias can also be incorporated into this sum. We show that if our procedures are followed, accurate estimation of bias-free components is possible.

  13. #2 - An Empirical Assessment of Exposure Measurement Error ...

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  14. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) above total amounts of special nuclear material, for example, uranium or 235 U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235 U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor Series expansion; variance cumulation by uncorrelated primary error sources as suggested by Jaech; random effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235 U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods
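
    The Taylor series step mentioned above reduces, for a simple product of independent measurements, to adding relative variances. The sketch below propagates hypothetical weight, concentration and enrichment uncertainties into the uncertainty of a single item's 235 U mass; the numbers are invented for illustration only.

```python
# First-order (Taylor series) variance propagation for m235 = weight * conc * enrich.
import math

weight, u_w = 1250.0, 0.5        # net weight (kg) and its standard deviation
conc, u_c   = 0.60, 0.003        # uranium mass fraction and its standard deviation
enrich, u_e = 0.040, 0.0004      # U-235 enrichment and its standard deviation

m235 = weight * conc * enrich    # kg of U-235

# variance from the partial derivatives of m235 with respect to each quantity
var_m235 = ((conc * enrich * u_w) ** 2 +
            (weight * enrich * u_c) ** 2 +
            (weight * conc * u_e) ** 2)
print(f"m235 = {m235:.3f} kg +/- {math.sqrt(var_m235):.3f} kg")

# equivalent shortcut for a product of independent factors: relative variances add
rel = math.sqrt((u_w / weight) ** 2 + (u_c / conc) ** 2 + (u_e / enrich) ** 2)
print(f"relative uncertainty ~ {rel * 100:.2f}%")
```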

  15. The Sustained Influence of an Error on Future Decision-Making.

    Science.gov (United States)

    Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel

    2017-01-01

    Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicated a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials, which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
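
    A toy drift-diffusion simulation (ours, not the authors' large-scale fit) illustrating the adaptive component described above: raising only the response threshold on post-error trials slows responses but increases accuracy. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, threshold, n_trials=1000, dt=0.001, noise=1.0, non_decision=0.3):
    """Simulate a symmetric two-boundary drift-diffusion process and return
    mean reaction time (s) and accuracy (fraction of upper-boundary hits)."""
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)
        correct.append(x >= threshold)
    return np.mean(rts), np.mean(correct)

# Post-error adaptation modeled solely as an increased response threshold.
rt_pre,  acc_pre  = simulate_ddm(drift=1.5, threshold=0.8)
rt_post, acc_post = simulate_ddm(drift=1.5, threshold=1.1)
print(f"pre-error : RT = {rt_pre:.3f} s, accuracy = {acc_pre:.3f}")
print(f"post-error: RT = {rt_post:.3f} s, accuracy = {acc_post:.3f}")  # slower but more accurate
```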

  16. The Sustained Influence of an Error on Future Decision-Making

    Directory of Open Access Journals (Sweden)

    Björn C. Schiffler

    2017-06-01

    Full Text Available Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants’ response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicated a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials, which further gives credence to these parameters’ role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.

  17. Managing organizational errors: Three theoretical lenses on a bank collapse

    OpenAIRE

    Giolito, Vincent

    2015-01-01

    Errors have been shown to be a major source of organizational disasters, yet scant research has paid attention to the management of errors, that is, what managers do once errors have occurred and how their actions may determine outcomes. In an early attempt to build a theory of the management of organizational errors, this paper examines how extant theory applies to the collapse of a bank. The financial industry was chosen because of the systemic risks it entails, as demonstrated by the financial cr...

  18. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality of the neutron multiplication factor in a fissile system, the 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and to the pulse neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated k_eff. In the pulse neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated k_eff. (author)

  19. Disasters of endoscopic surgery and how to avoid them: error analysis.

    Science.gov (United States)

    Troidl, H

    1999-08-01

    For every innovation there are two sides to consider. For endoscopic surgery the positive side is more comfort for the patient, and the negative side is new complications, even disasters, such as injuries to organs (e.g., the bowel), vessels, and the common bile duct. These disasters are rare and seldom reported in the scientific world, as at conferences, at symposiums, and in publications. Today there are many methods for testing an innovation (controlled clinical trials, consensus conferences, audits, and confidential inquiries). Reporting "complications," however, does not help to avoid them. We need real methods for avoiding negative failures. Failure analysis is the method of choice in industry. If an airplane crashes, error analysis starts immediately. Humans make errors, and making errors means punishment. Failure analysis means rigorously and objectively investigating a clinical situation to find clinically relevant information for avoiding these negative events in the future. Error analysis has four important steps: (1) What was the clinical situation? (2) What has happened? (3) Most important: Why did it happen? (4) How do we avoid the negative event or disaster in the future? Error analysis has decisive advantages. It is easy to perform; it supplies clinically relevant information to help avoid such events; and there is no need for money. It can be done everywhere, and the information is available in a short time. The other side of the coin is that error analysis is of course retrospective, it may not be objective, and, most important, it will probably have legal consequences. To be more effective in medicine and surgery we must handle our errors using a different approach. According to Sir Karl Popper: "The conclusion is that we have to learn from our errors. To cover up failure is therefore the biggest intellectual sin."

  20. A Critical Review of Naphthalene Sources and Exposures Relevant to Indoor and Outdoor Air

    Directory of Open Access Journals (Sweden)

    Chunrong Jia

    2010-07-01

    Full Text Available Both the recent classification of naphthalene as a possible human carcinogen and its ubiquitous presence motivate this critical review of naphthalene’s sources and exposures. We evaluate the environmental literature on naphthalene published since 1990, drawing on nearly 150 studies that report emissions and concentrations in indoor, outdoor and personal air. While naphthalene is both a volatile organic compound and a polycyclic aromatic hydrocarbon, concentrations and exposures are poorly characterized relative to many other pollutants. Most airborne emissions result from combustion, and key sources include industry, open burning, tailpipe emissions, and cigarettes. The second largest source is off-gassing, specifically from naphthalene’s use as a deodorizer, repellent and fumigant. In the U.S., naphthalene’s use as a moth repellent has been reduced in favor of para-dichlorobenzene, but extensive use continues in mothballs, which appears responsible for some of the highest indoor exposures, along with off-label uses. Among the studies judged to be representative, average concentrations ranged from 0.18 to 1.7 μg m⁻³ in non-smokers’ homes, and from 0.02 to 0.31 μg m⁻³ outdoors in urban areas. Personal exposures have been reported in only three European studies. Indoor sources are the major contributor to (non-occupational) exposure. While its central tendencies fall well below guideline levels relevant to acute health impacts, several studies have reported maximum concentrations exceeding 100 μg m⁻³, far above guideline levels. Using current but draft estimates of cancer risks, naphthalene is a major environmental risk driver, with typical individual risk levels in the 10⁻⁴ range, which is high and notable given that millions of individuals are exposed. Several factors influence indoor and outdoor concentrations, but the literature is inconsistent on their effects. Further investigation is needed to better characterize naphthalene.

  1. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    Directory of Open Access Journals (Sweden)

    Na Wei

    2016-05-01

    Full Text Available With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources.
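
    A toy least-squares illustration (ours, not the authors' spherical-harmonic code) of how signal in unresolved higher-order parameters aliases into the estimated low-order coefficients of a truncated inversion; the sensitivity-type matrix S maps the omitted coefficients onto the biases in the retained ones, which is the idea behind quantifying aliasing errors with a scaled sensitivity matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "stations" observing a field built from low- and high-order basis functions.
n_obs, n_low, n_high = 40, 4, 6
x = np.sort(rng.uniform(0.0, 1.0, n_obs))
basis = np.column_stack([np.cos(2 * np.pi * k * x) for k in range(n_low + n_high)])
A_low, A_high = basis[:, :n_low], basis[:, n_low:]

c_low_true = np.array([1.0, 0.5, -0.3, 0.2])
c_high_true = 0.3 * rng.standard_normal(n_high)           # unresolved higher-order signal
obs = A_low @ c_low_true + A_high @ c_high_true

# Truncated inversion: solve only for the low-order coefficients.
c_low_est, *_ = np.linalg.lstsq(A_low, obs, rcond=None)

# Sensitivity matrix: how each omitted coefficient leaks into each retained one.
S = np.linalg.pinv(A_low) @ A_high
predicted_aliasing = S @ c_high_true

print("estimation error  :", c_low_est - c_low_true)
print("predicted aliasing:", predicted_aliasing)           # the two agree
```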

  2. IPTV multicast with peer-assisted lossy error control

    Science.gov (United States)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to the impulse noise.

  3. Operator error and emotions. Operator error and emotions - a major cause of human failure

    International Nuclear Information System (INIS)

    Patterson, B.K.; Bradley, M.; Artiss, W.G.

    2000-01-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually founded in our human emotions. Basic psychological theory of emotions is briefly presented and then the authors present situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry, to review incident source documents for cases of emotional involvement and to develop means to collect emotion related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know one's emotions, manage emotions, motivate one's self, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error based in emotions and enable a cooperative, productive environment in which to work. (author)

  4. Operator error and emotions. Operator error and emotions - a major cause of human failure

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, B.K. [Human Factors Practical Incorporated (Canada); Bradley, M. [Univ. of New Brunswick, Saint John, New Brunswick (Canada); Artiss, W.G. [Human Factors Practical (Canada)

    2000-07-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually founded in our human emotions. Basic psychological theory of emotions is briefly presented and then the authors present situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry, to review incident source documents for cases of emotional involvement and to develop means to collect emotion related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know one's emotions, manage emotions, motivate one's self, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error based in emotions and enable a cooperative, productive environment in which to work. (author)

  5. Human error in remote Afterloading Brachytherapy

    International Nuclear Information System (INIS)

    Quinn, M.L.; Callan, J.; Schoenfeld, I.; Serig, D.

    1994-01-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety-significant problems related to human error.

  6. Sources of water vapor to economically relevant regions in Amazonia and the effect of deforestation

    Science.gov (United States)

    Pires, G. F.; Fontes, V. C.

    2017-12-01

    The Amazon rain forest helps regulate the regional humid climate. Understanding the effects of Amazon deforestation is important to preserve not only the climate, but also economic activities that depend on it, in particular, agricultural productivity and hydropower generation. This study calculates the source of water vapor contributing to the precipitation over economically relevant regions in Amazonia according to different scenarios of deforestation. These regions include the state of Mato Grosso, which produces about 9% of the global soybean production, and the basins of the Xingu and Madeira, with infrastructure under construction that will be capable of generating 20% of the electrical energy produced in Brazil. The results show that changes in rainfall after deforestation are stronger in regions nearest to the ocean and indicate the importance of the continental water vapor source to the precipitation over southern Amazonia. In the two more continental regions (Madeira and Mato Grosso), decreases in the source of water vapor in one region were offset by increases in contributions from other continental regions, whereas in the Xingu basin, which is closer to the ocean, this mechanism did not occur. As a conclusion, the geographic location of the region is an important determinant of the resiliency of the regional climate to deforestation-induced regional climate change. The more continental the geographic location, the less the climate changes after deforestation.

  7. Runtime Detection of C-Style Errors in UPC Code

    Energy Technology Data Exchange (ETDEWEB)

    Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  8. Errors in the administration of intravenous medication in Brazilian hospitals.

    Science.gov (United States)

    Anselmi, Maria Luiza; Peduzzi, Marina; Dos Santos, Claudia Benedita

    2007-10-01

    To verify the frequency of errors in the preparation and administration of intravenous medication in three Brazilian hospitals in the State of Bahia. The administration of intravenous medications constitutes a central activity in Brazilian nursing. Errors in performing this activity may result in irreparable damage to patients and may compromise the quality of care. Cross-sectional study, conducted in three hospitals in the State of Bahia, Brazil. Direct observation of the nursing staff (nurse technicians, auxiliary nurses and nurse attendants), preparing and administering intravenous medication. When preparing medication, the wrong-patient error did not occur in any of the three hospitals, whereas omission dose was the most frequent error in all study sites. When administering medication, the most frequent errors in the three hospitals were wrong dose and omission dose. The rates of error found are considered low compared with similar studies. The most frequent types of errors were wrong dose and omission dose. The hospitals studied showed different results, with the smallest rates of errors occurring in hospital 1, which presented the best working conditions. Relevance to clinical practice: studies such as this one have the potential to improve the quality of care.

  9. Emissions of perfluorinated alkylated substances (PFAS) from point sources--identification of relevant branches.

    Science.gov (United States)

    Clara, M; Scheffknecht, C; Scharf, S; Weiss, S; Gans, O

    2008-01-01

    Effluents of wastewater treatment plants are relevant point sources for the emission of hazardous xenobiotic substances to the aquatic environment. One group of substances, which recently entered scientific and political discussions, is the group of the perfluorinated alkylated substances (PFAS). The most studied compounds from this group are perfluorooctanoic acid (PFOA) and perfluorooctane sulphonate (PFOS), which are the most important degradation products of PFAS. These two substances are known to be persistent, bioaccumulative and toxic (PBT). In the present study, eleven PFAS were investigated in effluents of municipal wastewater treatment plants (WWTP) and in industrial wastewaters. PFOS and PFOA proved to be the dominant compounds in all sampled wastewaters. Concentrations of up to 340 ng/L of PFOS and up to 220 ng/L of PFOA were observed. Besides these two compounds, perfluorohexanoic acid (PFHxA) was also present in nearly all effluents and maximum concentrations of up to 280 ng/L were measured. Only N-ethylperfluorooctane sulphonamide (N-EtPFOSA) and its degradation/metabolisation product perfluorooctane sulphonamide (PFOSA) were either detected below the limit of quantification or were not detected at all. Besides the effluents of the municipal WWTPs, nine industrial wastewaters from six different industrial branches were also investigated. Significantly, the highest emissions of PFOS were observed from the metal industry, whereas the paper industry showed the highest PFOA emission. Several PFAS, especially perfluorononanoic acid (PFNA), perfluorodecanoic acid (PFDA), perfluorododecanoic acid (PFDoA) and PFOS, are predominantly emitted from industrial sources, with concentrations being a factor of 10 higher than those observed in the municipal WWTP effluents. Perfluorodecane sulphonate (PFDS), N-Et-PFOSA and PFOSA were not detected in any of the sampled industrial point sources. (c) IWA Publishing 2008.

  10. Medication errors with the use of allopurinol and colchicine: a retrospective study of a national, anonymous Internet-accessible error reporting system.

    Science.gov (United States)

    Mikuls, Ted R; Curtis, Jeffrey R; Allison, Jeroan J; Hicks, Rodney W; Saag, Kenneth G

    2006-03-01

    To more closely assess medication errors in gout care, we examined data from a national, Internet-accessible error reporting program over a 5-year reporting period. We examined data from the MEDMARX database, covering the period from January 1, 1999 through December 31, 2003. For allopurinol and colchicine, we examined error severity, source, type, contributing factors, and healthcare personnel involved in errors, and we detailed errors resulting in patient harm. Causes of error and the frequency of other error characteristics were compared for gout medications versus other musculoskeletal treatments using the chi-square statistic. Gout medication errors occurred in 39% (n = 273) of facilities participating in the MEDMARX program. Reported errors were predominantly from the inpatient hospital setting and related to the use of allopurinol (n = 524), followed by colchicine (n = 315), probenecid (n = 50), and sulfinpyrazone (n = 2). Compared to errors involving other musculoskeletal treatments, allopurinol and colchicine errors were more often ascribed to problems with physician prescribing (7% for other therapies versus 23-39% for allopurinol and colchicine, p < 0.0001) and less often due to problems with drug administration or nursing error (50% vs 23-27%, p < 0.0001). Our results suggest that inappropriate prescribing practices are characteristic of errors occurring with the use of allopurinol and colchicine. Physician prescribing practices are a potential target for quality improvement interventions in gout care.
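
    A hedged sketch of the kind of chi-square comparison reported above, using an invented 2x2 table of error counts (these numbers are illustrative only and are not the MEDMARX data):

```python
from scipy.stats import chi2_contingency

# Rows: medication group; columns: error attributed to prescribing vs. any other source.
table = [
    [150, 370],   # allopurinol / colchicine errors (hypothetical counts)
    [ 70, 930],   # other musculoskeletal treatments (hypothetical counts)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```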

  11. Error analysis for 1-1/2-loop semiscale system isothermal test data

    International Nuclear Information System (INIS)

    Feldman, E.M.; Naff, S.A.

    1975-05-01

    An error analysis was performed on the measurements made during the isothermal portion of the Semiscale Blowdown and Emergency Core Cooling (ECC) Project. A brief description of the measurement techniques employed, an identification of potential sources of errors, and a quantification of the errors associated with the data are presented. (U.S.)

  12. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  13. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^-(dn-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
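
    A single-qubit numerical sketch (ours, much simpler than the repetition-code analysis above) of why coherent over-rotations are worse than a stochastic Pauli approximation suggests: amplitudes add under repeated coherent errors, so the flip probability grows roughly quadratically with the number of applications instead of linearly.

```python
import numpy as np

eps = 0.01                                    # small over-rotation angle per gate (radians)
X = np.array([[0, 1], [1, 0]], dtype=complex)
U_err = np.cos(eps) * np.eye(2) - 1j * np.sin(eps) * X    # coherent error exp(-i*eps*X)

psi = np.array([1, 0], dtype=complex)                     # start in |0>
p = np.sin(eps) ** 2                                      # per-gate flip probability in the Pauli model
for n in (1, 10, 100):
    amp = np.linalg.matrix_power(U_err, n) @ psi
    p_coherent = abs(amp[1]) ** 2                         # = sin^2(n*eps), ~ quadratic growth
    p_pauli = 0.5 * (1 - (1 - 2 * p) ** n)                # composed bit-flip channel, ~ linear growth
    print(f"n = {n:3d}: coherent = {p_coherent:.4e}   Pauli approx. = {p_pauli:.4e}")
```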

  14. Three-dimensional tomosynthetic image restoration for brachytherapy source localization

    International Nuclear Information System (INIS)

    Persons, Timothy M.

    2001-01-01

    Tomosynthetic image reconstruction allows for the production of a virtually infinite number of slices from a finite number of projection views of a subject. If the reconstructed image volume is viewed in toto, and the three-dimensional (3D) impulse response is accurately known, then it is possible to solve the inverse problem (deconvolution) using canonical image restoration methods (such as Wiener filtering or solution by conjugate gradient least squares iteration) by extension to three dimensions in either the spatial or the frequency domains. This dissertation presents modified direct and iterative restoration methods for solving the inverse tomosynthetic imaging problem in 3D. The significant blur artifact that is common to tomosynthetic reconstructions is deconvolved by solving for the entire 3D image at once. The 3D impulse response is computed analytically using a fiducial reference schema as realized in a robust, self-calibrating solution to generalized tomosynthesis. 3D modulation transfer function analysis is used to characterize the tomosynthetic resolution of the 3D reconstructions. The relevant clinical application of these methods is 3D imaging for brachytherapy source localization. Conventional localization schemes for brachytherapy implants using orthogonal or stereoscopic projection radiographs suffer from scaling distortions and poor visibility of implanted seeds, resulting in compromised source tracking (reported errors: 2-4 mm) and dosimetric inaccuracy. 3D image reconstruction (using a well-chosen projection sampling scheme) and restoration of a prostate brachytherapy phantom is used for testing. The approaches presented in this work localize source centroids with submillimeter error in two Cartesian dimensions and just over one millimeter error in the third.
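
    A one-dimensional Wiener-deconvolution sketch (ours); the dissertation's restoration is the three-dimensional analogue, with the analytically computed tomosynthetic impulse response in place of the Gaussian blur assumed here.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 256
truth = np.zeros(n)
truth[[60, 128, 190]] = 1.0                         # point-like "seeds"

t = np.arange(n) - n // 2
psf = np.exp(-0.5 * (t / 4.0) ** 2)
psf /= psf.sum()
psf = np.roll(psf, -n // 2)                         # centre the PSF at index 0

blurred = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf)))
noisy = blurred + 0.01 * rng.standard_normal(n)

H = np.fft.fft(psf)
nsr = 1e-3                                          # noise-to-signal power ratio (regulariser)
wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)
restored = np.real(np.fft.ifft(np.fft.fft(noisy) * wiener))

for lo, hi in [(0, 100), (100, 160), (160, 256)]:   # should recover peaks near 60, 128, 190
    print("recovered peak near index", lo + np.argmax(restored[lo:hi]))
```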

  15. Accounting for measurement error: a critical but often overlooked process.

    Science.gov (United States)

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
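
    A small sketch of the duplicate-measurement statistic the abstract discusses, using the standard Dahlberg form of the technical error of measurement; the measurements below are invented.

```python
import numpy as np

def technical_error_of_measurement(session1, session2):
    """Dahlberg-style TEM for duplicate measurements of the same specimens:
    TEM = sqrt(sum(d_i^2) / (2N)), expressed in the units of the measurement."""
    d = np.asarray(session1, float) - np.asarray(session2, float)
    return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

# Hypothetical crown diameters (mm) measured twice on the same six teeth.
s1 = [10.2, 9.8, 11.1, 10.5, 9.9, 10.7]
s2 = [10.3, 9.7, 11.0, 10.6, 10.1, 10.7]

tem = technical_error_of_measurement(s1, s2)
print(f"TEM = {tem:.3f} mm (relative TEM = {100 * tem / np.mean(s1 + s2):.2f}%)")
```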

  16. Radon measurements: the sources of uncertainties

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2008-01-01

    Full text: Radon measurements are quite a complicated process and the correct estimation of uncertainties is very important. The sources of uncertainties for grab sampling, short-term measurements (charcoal canisters), long-term measurements (track detectors) and retrospective measurements (surface traps) are analyzed. The main sources of uncertainties for grab sampling measurements are: systematic bias of reference equipment; random Poisson and non-Poisson errors during calibration; random Poisson and non-Poisson errors during measurements. These sources are also common both for short-term measurements (charcoal canisters) and long-term measurements (track detectors). Usually during the calibration high radon concentrations are used (1-5 kBq/m³) and the Poisson random error rarely exceeds a few percent. Nevertheless the dispersion of measured values even during the calibration usually exceeds the Poisson dispersion expected on the basis of counting statistics. The origins of such non-Poisson random errors during calibration are different for different kinds of instrumental measurements. At present not all sources of non-Poisson random errors are reliably identified. The initial calibration accuracy of working devices rarely exceeds 20%. The real radon concentrations usually are in the range from some tens to some hundreds of becquerels per cubic meter, and for low radon levels the Poisson random error can reach up to 20%. The random non-Poisson errors and residual systematic biases depend on the kind of measurement technique and the environmental conditions during radon measurements. For charcoal canisters there are additional sources of measurement errors due to the influence of air humidity and the variations of radon concentration during the canister exposure. The accuracy of long-term measurements by track detectors will depend on the quality of chemical etching after exposure and the influence of seasonal radon variations. The main sources of
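
    A back-of-the-envelope sketch (ours) of the Poisson counting component of the uncertainty for a track-detector measurement; the calibration factor, its uncertainty and the counts are invented, and the non-Poisson components discussed above would have to be added on top.

```python
import numpy as np

tracks = 400                # counted tracks on the detector (hypothetical)
background = 25             # expected background tracks (hypothetical)
calib = 0.004               # tracks per (Bq h m^-3), hypothetical calibration factor
calib_rel_u = 0.10          # 10% relative uncertainty of the calibration
exposure_h = 2160           # ~3-month exposure

net = tracks - background
net_rel_u = np.sqrt(tracks + background) / net          # Poisson counting term
conc = net / (calib * exposure_h)                       # Bq per cubic metre
rel_u = np.sqrt(net_rel_u ** 2 + calib_rel_u ** 2)      # combined with calibration term
print(f"radon concentration ~ {conc:.1f} Bq/m3 +/- {100 * rel_u:.1f}%")
```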

  17. Identifying afterloading PDR and HDR brachytherapy errors using real-time fiber-coupled Al2O3:C dosimetry and a novel statistical error decision criterion

    DEFF Research Database (Denmark)

    Kertzscher, Gustavo; Andersen, Claus Erik; Siebert, Frank-André

    2011-01-01

    treatment errors, including interchanged pairs of afterloader guide tubes and 2–20mm source displacements, were monitored using a real-time fiber-coupled carbon doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated...

  18. Errors and parameter estimation in precipitation-runoff modeling: 1. Theory

    Science.gov (United States)

    Troutman, Brent M.

    1985-01-01

    Errors in complex conceptual precipitation-runoff models may be analyzed by placing them into a statistical framework. This amounts to treating the errors as random variables and defining the probabilistic structure of the errors. By using such a framework, a large array of techniques, many of which have been presented in the statistical literature, becomes available to the modeler for quantifying and analyzing the various sources of error. A number of these techniques are reviewed in this paper, with special attention to the peculiarities of hydrologic models. Known methodologies for parameter estimation (calibration) are particularly applicable for obtaining physically meaningful estimates and for explaining how bias in runoff prediction caused by model error and input error may contribute to bias in parameter estimation.

  19. Standard Practice for Minimizing Dosimetry Errors in Radiation Hardness Testing of Silicon Electronic Devices Using Co-60 Sources

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This practice covers recommended procedures for the use of dosimeters, such as thermoluminescent dosimeters (TLD's), to determine the absorbed dose in a region of interest within an electronic device irradiated using a Co-60 source. Co-60 sources are commonly used for the absorbed dose testing of silicon electronic devices. Note 1—This absorbed-dose testing is sometimes called “total dose testing” to distinguish it from “dose rate testing.” Note 2—The effects of ionizing radiation on some types of electronic devices may depend on both the absorbed dose and the absorbed dose rate; that is, the effects may be different if the device is irradiated to the same absorbed-dose level at different absorbed-dose rates. Absorbed-dose rate effects are not covered in this practice but should be considered in radiation hardness testing. 1.2 The principal potential error for the measurement of absorbed dose in electronic devices arises from non-equilibrium energy deposition effects in the vicinity o...

  20. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    Science.gov (United States)

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  1. Intraoperative visualization and assessment of electromagnetic tracking error

    Science.gov (United States)

    Harish, Vinyas; Ungi, Tamas; Lasso, Andras; MacDonald, Andrew; Nanji, Sulaiman; Fichtinger, Gabor

    2015-03-01

    Electromagnetic tracking allows for increased flexibility in designing image-guided interventions, however it is well understood that electromagnetic tracking is prone to error. Visualization and assessment of the tracking error should take place in the operating room with minimal interference with the clinical procedure. The goal was to achieve this ideal in an open-source software implementation in a plug and play manner, without requiring programming from the user. We use optical tracking as a ground truth. An electromagnetic sensor and optical markers are mounted onto a stylus device, pivot calibrated for both trackers. Electromagnetic tracking error is defined as difference of tool tip position between electromagnetic and optical readings. Multiple measurements are interpolated into the thin-plate B-spline transform visualized in real time using 3D Slicer. All tracked devices are used in a plug and play manner through the open-source SlicerIGT and PLUS extensions of the 3D Slicer platform. Tracking error was measured multiple times to assess reproducibility of the method, both with and without placing ferromagnetic objects in the workspace. Results from exhaustive grid sampling and freehand sampling were similar, indicating that a quick freehand sampling is sufficient to detect unexpected or excessive field distortion in the operating room. The software is available as a plug-in for the 3D Slicer platforms. Results demonstrate potential for visualizing electromagnetic tracking error in real time for intraoperative environments in feasibility clinical trials in image-guided interventions.
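
    A minimal sketch of the error metric described above: once both trackers report poses of the same pivot-calibrated stylus, the electromagnetic tracking error at a sample point is the distance between the two reported tip positions (the thin-plate B-spline interpolation of many such samples is omitted here, and all numbers are invented).

```python
import numpy as np

def tip_position(pose, tip_offset):
    """Map the pivot-calibrated tip offset (tool coordinates) into tracker
    coordinates; pose = (3x3 rotation matrix, 3-vector translation)."""
    rotation, translation = pose
    return rotation @ tip_offset + translation

tip = np.array([0.0, 0.0, 150.0])                              # mm, from pivot calibration
optical_pose = (np.eye(3), np.array([10.0, 20.0, 30.0]))       # ground-truth tracker
em_pose = (np.eye(3), np.array([10.8, 19.6, 31.1]))            # distorted electromagnetic reading

error_mm = np.linalg.norm(tip_position(optical_pose, tip) - tip_position(em_pose, tip))
print(f"electromagnetic tracking error at this sample: {error_mm:.2f} mm")
```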

  2. Total error vs. measurement uncertainty: revolution or evolution?

    Science.gov (United States)

    Oosterhuis, Wytze P; Theodorsson, Elvar

    2016-02-01

    The first strategic EFLM conference "Defining analytical performance goals, 15 years after the Stockholm Conference" was held in the autumn of 2014 in Milan. It maintained the Stockholm 1999 hierarchy of performance goals but rearranged them and established five task and finish groups to work on topics related to analytical performance goals, including one on the "total error" theory. Jim Westgard recently wrote a comprehensive overview of performance goals and of the total error theory that is critical of the results and intentions of the Milan 2014 conference. The "total error" theory, originated by Jim Westgard and co-workers, has a dominating influence on the theory and practice of clinical chemistry but is not accepted in other fields of metrology. The generally accepted uncertainty theory, however, suffers from complex mathematics and perceived impracticability in clinical chemistry. The pros and cons of the total error theory need to be debated, making way for methods that can incorporate all relevant causes of uncertainty when making medical diagnoses and monitoring treatment effects. This development should preferably proceed not as a revolution but as an evolution.
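
    A hedged numerical contrast (ours) between the two quantities debated above: a Westgard-style total error combines bias and imprecision additively, whereas a GUM-style combined uncertainty adds variance components in quadrature (with bias ideally corrected for). All percentages are invented.

```python
import numpy as np

bias = 1.5       # %, systematic error estimated against a reference (hypothetical)
cv = 2.0         # %, analytical imprecision as a coefficient of variation (hypothetical)

# Westgard-style total error at ~95% one-sided coverage:
total_error = abs(bias) + 1.65 * cv

# GUM-style combined uncertainty: the bias is corrected for, and the standard
# uncertainty of that correction (assumed here) is combined with imprecision.
u_bias = 0.6     # %, hypothetical standard uncertainty of the bias estimate
u_combined = np.sqrt(cv ** 2 + u_bias ** 2)
expanded_u = 2.0 * u_combined                 # coverage factor k = 2

print(f"total error                : {total_error:.2f}%")
print(f"expanded uncertainty (k=2) : {expanded_u:.2f}%")
```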

  3. A Posteriori Error Analysis of Stochastic Differential Equations Using Polynomial Chaos Expansions

    KAUST Repository

    Butler, T.; Dawson, C.; Wildey, T.

    2011-01-01

    We develop computable a posteriori error estimates for linear functionals of a solution to a general nonlinear stochastic differential equation with random model/source parameters. These error estimates are based on a variational analysis applied to stochastic Galerkin methods for forward and adjoint problems. The result is a representation for the error estimate as a polynomial in the random model/source parameter. The advantage of this method is that we use polynomial chaos representations for the forward and adjoint systems to cheaply produce error estimates by simple evaluation of a polynomial. By comparison, the typical method of producing such estimates requires repeated forward/adjoint solves for each new choice of random parameter. We present numerical examples showing that there is excellent agreement between these methods. © 2011 Society for Industrial and Applied Mathematics.

  4. Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors

    Science.gov (United States)

    Marti, Alejandro; Folch, Arnau

    2018-03-01

    Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally

  5. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Science.gov (United States)

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
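
    A small simulation (ours, with invented effect sizes) of the point made above: classical random error in the exposure attenuates its estimated effect, error in the confounder inflates it through residual confounding, and error in both can push the estimate in either direction.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

conf = rng.standard_normal(n)                        # confounder (standardised)
expo = 0.6 * conf + rng.standard_normal(n)           # exposure correlated with the confounder
outcome = 1.0 * expo + 2.0 * conf + rng.standard_normal(n)

def exposure_coefficient(x, c, y):
    """OLS of y on [x, c, intercept]; return the coefficient of x."""
    design = np.column_stack([x, c, np.ones_like(x)])
    return np.linalg.lstsq(design, y, rcond=None)[0][0]

x_err = expo + 0.8 * rng.standard_normal(n)          # classical error in the exposure
c_err = conf + 0.8 * rng.standard_normal(n)          # classical error in the confounder

print("true exposure effect     : 1.00")
print(f"error in exposure only   : {exposure_coefficient(x_err, conf, outcome):.2f}")  # attenuated
print(f"error in confounder only : {exposure_coefficient(expo, c_err, outcome):.2f}")  # inflated
print(f"error in both            : {exposure_coefficient(x_err, c_err, outcome):.2f}")
```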

  6. Angular discretization errors in transport theory

    International Nuclear Information System (INIS)

    Nelson, P.; Yu, F.

    1992-01-01

    Elements of information-based complexity theory are computed for several types of information and associated algorithms for angular approximations in the setting of a one-dimensional model problem. For point-evaluation information, the local and global radii of information are computed, a (trivial) optimal algorithm is determined, and the local and global error of a discrete ordinates algorithm are shown to be infinite. For average cone-integral information, the local and global radii of information are computed, and the local and global error tends to zero as the underlying partition is indefinitely refined. A central algorithm for such information and an optimal partition (of given cardinality) are described. It is further shown that the analytic first-collision source method has zero error (for the purely absorbing model problem). Implications of the restricted problem domains suitable for the various types of information are discussed.

  7. The Language of Scholarship: How to Rapidly Locate and Avoid Common APA Errors.

    Science.gov (United States)

    Freysteinson, Wyona M; Krepper, Rebecca; Mellott, Susan

    2015-10-01

    This article is relevant for nurses and nursing students who are writing scholarly documents for work, school, or publication and who have a basic understanding of American Psychological Association (APA) style. Common APA errors on the reference list and in citations within the text are reviewed. Methods to quickly find and reduce those errors are shared. Copyright 2015, SLACK Incorporated.

  8. Continuous quantum error correction for non-Markovian decoherence

    International Nuclear Information System (INIS)

    Oreshkov, Ognyan; Brun, Todd A.

    2007-01-01

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics

  9. Barriers to reporting medication errors and near misses among nurses: A systematic review.

    Science.gov (United States)

    Vrbnjak, Dominika; Denieffe, Suzanne; O'Gorman, Claire; Pajnkihar, Majda

    2016-11-01

    To explore barriers to nurses' reporting of medication errors and near misses in hospital settings. Systematic review. Medline, CINAHL, PubMed and the Cochrane Library, in addition to Google and Google Scholar and reference lists of relevant studies published in English between January 1981 and April 2015, were searched for relevant qualitative, quantitative or mixed methods empirical studies or unpublished PhD theses. Papers with a primary focus on barriers to reporting medication errors and near misses in nursing were included. The titles and abstracts of the search results were assessed for eligibility and relevance by one of the authors. After retrieval of the full texts, two of the authors independently made decisions concerning the final inclusion and these were validated by the third reviewer. Three authors independently assessed the methodological quality of the studies. Relevant data were extracted and findings were synthesised using thematic synthesis. From 4038 identified records, 38 studies were included in the synthesis. Findings suggest that organizational barriers such as culture, the reporting system and management behaviour, in addition to personal and professional barriers such as fear, accountability and characteristics of nurses, are barriers to reporting medication errors. To overcome the reported barriers it is necessary to develop a non-blaming, non-punitive and non-fearful learning culture at unit and organizational level. Anonymous, effective, uncomplicated and efficient reporting systems and supportive management behaviour that provides open feedback to nurses are needed. Nurses are accountable for patients' safety, so they need to be educated and skilled in error management. Lack of research into barriers to reporting of near misses and low awareness of reporting suggest the need for further research and development of educational and management approaches to overcome these barriers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors still remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  11. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

    One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of the radionuclides that suddenly and without control reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and the measurement itself that contribute to a high degree to the total mass activity evaluation error. Statistical errors in gamma spectrometry as well as in total mass alpha and beta activity evaluation are also discussed. Besides, some of the possible sources of errors in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of these errors to the total mass activity evaluation error is estimated and procedures that could possibly reduce it are discussed (author)

  12. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow uncertainty in mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which showed expected seasonal melt evolution trends and vigorously assessed the statistical relevance of the resulting fraction estimations. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, unrealized by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
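
    A deliberately simplified two-end-member sketch (ours) in the spirit of the Monte Carlo mixing described above, using an acceptance-rejection step to propagate end-member uncertainty into the inferred fraction; the model in the paper is a full Bayesian multi-isotope formulation, and every number below is invented.

```python
import numpy as np

rng = np.random.default_rng(4)

snow = (-22.0, 1.0)                   # hypothetical end-member delta-18O: (mean, sd) in permil
ice = (-17.0, 1.0)
sample_mean, sample_sd = -19.0, 0.3   # measured bulk meltwater value (hypothetical)

n_draws = 200_000
f_snow = rng.uniform(0.0, 1.0, n_draws)                       # flat prior on the snow fraction
mixture = f_snow * rng.normal(*snow, n_draws) + (1 - f_snow) * rng.normal(*ice, n_draws)

# Keep draws whose predicted mixture is consistent with the (noisy) measurement.
keep = f_snow[np.abs(mixture - rng.normal(sample_mean, sample_sd, n_draws)) < 0.1]
lo, med, hi = np.percentile(keep, [2.5, 50, 97.5])
print(f"snow-melt fraction: {med:.2f} (95% interval {lo:.2f}-{hi:.2f}, {keep.size} accepted draws)")
```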

  13. Size, Composition, and Sources of Health Relevant Particulate Matter in the San Joaquin Valley

    Science.gov (United States)

    Ham, Walter Allan

    Particulate Matter (PM) is an environmental contaminant that has been associated with adverse health effects in epidemiological and toxicological studies. Atmospheric PM is made up of a diverse array of chemical species that are emitted from multiple sources across a range of aerodynamic diameters spanning several orders of magnitude. The focus of the present work was the characterization of ambient PM with aerodynamic diameters below 1.8 μm (PM1.8) in 6 size sub-fractions including PM0.1. Chemical species measured included organic carbon, elemental carbon, water-soluble ions, trace metals, and organic molecular markers in urban and rural environments in the San Joaquin Valley. These measurements were used to determine differences in relative diurnal size distributions during a severe winter stagnation event, seasonal changes in PM size and composition, and the source origin of carbonaceous PM. This size-resolved information was used to calculate lung deposition patterns of health-relevant PM species to evaluate seasonal differences in PM dose. By accurately calculating PM dose, researchers are able to more directly link ambient PM characterization data with biological endpoints. All of these results are used to support ongoing toxicological health effects studies. These types of analyses are important as this type of information may assist regulators with developing control strategies to reduce health effects caused by particulate air pollution.

  14. Mismeasurement and the resonance of strong confounders: correlated errors.

    Science.gov (United States)

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.

  15. Identifying medication error chains from critical incident reports: a new analytic approach.

    Science.gov (United States)

    Huckels-Baumgart, Saskia; Manser, Tanja

    2014-10-01

    Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. Our study was conducted at a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the NCC MERP Medication Error Index and the WHO Classification for Patient Safety methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during the administration of medication. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.

  16. Sources of error in etched-track radon measurements and a review of passive detectors using results from a series of radon intercomparisons

    International Nuclear Information System (INIS)

    Ibrahimi, Z.-F.; Howarth, C.B.; Miles, J.C.H.

    2009-01-01

    Etched-track passive radon detectors are a well established and apparently simple technology. As with any measurement system, there are multiple sources of uncertainty and potential for error. The authors discuss these as well as good quality assurance practices. Identification and assessment of sources of error is crucial to maintain high quality standards by a measurement laboratory. These sources can be found both within and outside the radon measurement laboratory itself. They can lead to changes in track characteristics and ultimately detector response to radon exposure. Changes don't just happen during etching, but can happen during the recording or counting of etched-tracks (for example ageing and fading effects on track sensitivity, or focus and image acquisition variables). Track overlap means the linearity of response of detectors will vary as exposure increases. The laboratory needs to correct the calibration curve due to this effect if it wishes to offer detectors that cover a range of exposures likely to be observed in the field. Extrapolation of results to estimate annual average concentrations also has uncertainty associated with it. Measurement systems need to be robust, reliable and stable. If a laboratory is not actively and constantly monitoring for anomalies via internal testing, the laboratory may not become aware of a problem until some form of external testing occurs, eg an accreditation process, performance test, interlaboratory comparison exercise or when a customer has cause to query results. Benchmark standards of accuracy and precision achievable with passive detectors are discussed drawing on trends from the series of intercomparison exercises for passive radon detectors which began in 1982, organised by the National Radiological Protection Board (NRPB), subsequently the Health Protection Agency (HPA).

  17. A dozen useful tips on how to minimise the influence of sources of error in quantitative electron paramagnetic resonance (EPR) spectroscopy-A review

    International Nuclear Information System (INIS)

    Mazur, Milan

    2006-01-01

    The principal and most important error sources in quantitative electron paramagnetic resonance (EPR) measurements arising from sample-associated factors are the influence of the variation of the sample material (dielectric constant), sample size and shape, sample tube wall thickness, and sample orientation and positioning within the microwave cavity on the EPR signal intensity. Variation in these parameters can cause significant and serious errors in the primary phase of quantitative EPR analysis (i.e., data acquisition). The primary aim of this review is to provide useful suggestions, recommendations and simple procedures to minimise the influence of such primary error sources in quantitative EPR measurements. According to the literature, as well as results obtained in our EPR laboratory, the following are recommendations for samples that are compared in quantitative EPR studies: (i) the shape of all samples should be identical; (ii) the position of the sample/reference in the cavity should be identical; (iii) a special alignment procedure for precise sample positioning within the cavity should be adopted; (iv) a special/consistent procedure for sample packing for a powder material should be used; (v) the wall thickness of sample tubes should be identical; (vi) the shape and wall thickness of quartz Dewars, where used, should be identical; (vii) where possible a double TE104 cavity should be used in quantitative EPR spectroscopy; (viii) the dielectric properties of unknown and standard samples should be as close as possible; (ix) a sample length less than double the cavity length should be used; (x) the optimised sample geometry for the X-band cavity is a 30 mm-long capillary with i.d. less than 1.5 mm; (xi) use of commercially distributed software for post-recording spectra manipulation is a basic necessity; and (xii) the sample and laboratory temperature should be kept constant during measurements. When the above recommendations and procedures were used

  18. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  19. Learning mechanisms to limit medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  20. Energy-Water Nexus Relevant to Baseload Electricity Source Including Mini/Micro Hydropower Generation

    Science.gov (United States)

    Fujii, M.; Tanabe, S.; Yamada, M.

    2014-12-01

    Water, food and energy are three sacred treasures that are necessary for human beings. However, recent factors such as population growth and the rapid increase in energy consumption have generated conflicts between water and energy. For example, enhanced energy use has caused conflicts between hydropower generation and riverine ecosystems and service water, between shale gas and groundwater, and between geothermal energy and hot spring water. This study aims to provide quantitative guidelines necessary for capacity building among various stakeholders to minimize water-energy conflicts in enhancing energy use. Among various kinds of renewable energy sources, we target baseload sources, especially focusing on renewable energy whose installation is socially required not only to reduce CO2 and other greenhouse gas emissions but also to stimulate the local economy. Such renewable energy sources include micro/mini hydropower and geothermal. Three municipalities in Japan (Beppu City, Obama City and Otsuchi Town) are selected as the primary sites of this study. Based on the calculated potential supply and demand of micro/mini hydropower generation in Beppu City, for example, we estimate that the electricity demand of tens to hundreds of households could be covered by installing new micro/mini hydropower generation plants along each river. However, this result is based on the existing infrastructure such as roads and electric lines. This means that more potential is expected if the local society chooses options that enhance the infrastructure to increase micro/mini hydropower generation plants. In addition, further capacity building in the local society is necessary. In Japan, for example, regulations under the river law and irrigation rights restrict new entry of actors to the river. Possible influences on riverine ecosystems in installing new micro/mini hydropower generation plants should also be well taken into account. Deregulation of the existing laws relevant to rivers and

  1. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    Science.gov (United States)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  2. Computer input devices: neutral party or source of significant error in manual lesion segmentation?

    Science.gov (United States)

    Chen, James Y; Seagull, F Jacob; Nagy, Paul; Lakhani, Paras; Melhem, Elias R; Siegel, Eliot L; Safdar, Nabile M

    2011-02-01

    Lesion segmentation involves outlining the contour of an abnormality on an image to distinguish boundaries between normal and abnormal tissue and is essential to track malignant and benign disease in medical imaging for clinical, research, and treatment purposes. A laser optical mouse and a graphics tablet were used by radiologists to segment 12 simulated reference lesions per subject, one set of six for each input device, with each set composed of three lesion morphologies in two sizes. Time for segmentation was recorded. Subjects completed an opinion survey following segmentation. Error in contour segmentation was calculated using root mean square error. Error in area of segmentation was calculated compared to the reference lesion. Eleven radiologists segmented a total of 132 simulated lesions. Overall error in contour segmentation was less with the graphics tablet than with the mouse. Error in area of segmentation was not significantly different between the tablet and the mouse (P = 0.62). Time for segmentation was less with the tablet than the mouse (P = 0.011). All subjects preferred the graphics tablet for future segmentation (P = 0.011) and felt subjectively that the tablet was faster, easier, and more accurate (P = 0.0005). For purposes in which accuracy of lesion contour segmentation is of greater importance, the graphics tablet is superior to the mouse in accuracy, with a small speed benefit. For purposes in which accuracy of the area of lesion segmentation is of greater importance, the graphics tablet and mouse are equally accurate.
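
    The two error measures named in the abstract can be sketched for a toy contour as follows; the circular reference lesion, the perturbed segmented contour and the polar-sampling scheme are assumptions for illustration, not the study's actual phantoms or scoring code.

    ```python
    import numpy as np

    def contour_rmse(ref_r, seg_r):
        """Root mean square radial distance between two contours sampled at the same
        polar angles (a simplified stand-in for point-wise contour RMSE)."""
        return np.sqrt(np.mean((seg_r - ref_r) ** 2))

    def area_error(ref_r, seg_r, dtheta):
        """Relative error of the enclosed area computed from evenly spaced polar samples."""
        ref_area = 0.5 * np.sum(ref_r ** 2) * dtheta
        seg_area = 0.5 * np.sum(seg_r ** 2) * dtheta
        return (seg_area - ref_area) / ref_area

    theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    dtheta = theta[1] - theta[0]
    ref = np.full_like(theta, 10.0)                    # reference lesion: circle, r = 10 mm
    seg = ref + 0.5 * np.sin(3 * theta) + 0.2          # segmented contour with wobble and bias

    print(f"contour RMSE: {contour_rmse(ref, seg):.2f} mm")
    print(f"area error:   {area_error(ref, seg, dtheta) * 100:.1f} %")
    ```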

  3. Basic considerations in predicting error probabilities in human task performance

    International Nuclear Information System (INIS)

    Fleishman, E.A.; Buffardi, L.C.; Allen, J.A.; Gaskins, R.C. III

    1990-04-01

    It is well established that human error plays a major role in the malfunctioning of complex systems. This report takes a broad look at the study of human error and addresses the conceptual, methodological, and measurement issues involved in defining and describing errors in complex systems. In addition, a review of existing sources of human reliability data and approaches to human performance data base development is presented. Alternative task taxonomies, which are promising for establishing the comparability on nuclear and non-nuclear tasks, are also identified. Based on such taxonomic schemes, various data base prototypes for generalizing human error rates across settings are proposed. 60 refs., 3 figs., 7 tabs

  4. Effects of human errors on the determination of surveillance test interval

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Koo, Bon Hyun

    1990-01-01

    This paper incorporates the effects of human errors relevant to the periodic test into the unavailability of the safety system as well as the component unavailability. Two types of possible human error during the test are considered. One is the possibility that a good safety system is inadvertently left in a bad state after the test (Type A human error) and the other is the possibility that a bad safety system remains undetected by the test (Type B human error). An event tree model is developed for the steady-state unavailability of the safety system to determine the effects of human errors on the component unavailability and the test interval. We perform the reliability analysis of the safety injection system (SIS) by applying the aforementioned two types of human error to the safety injection pumps. Results of various sensitivity analyses show that: (1) the appropriate test interval decreases and the steady-state unavailability increases as the probabilities of both types of human errors increase, and they are far more sensitive to Type A human error than to Type B; and (2) the SIS unavailability increases slightly as the probability of Type B human error increases, and significantly as the probability of Type A human error increases. Therefore, to avoid underestimation, the effects of human error should be incorporated in any system reliability analysis that aims at relaxation of the surveillance test intervals, and Type A human error has the more important effect on the unavailability and the surveillance test interval
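
    The trade-off the paper explores can be illustrated with a textbook-style first-order unavailability expression rather than the paper's event tree model; the failure rate, test downtime and human error probabilities below are illustrative assumptions.

    ```python
    import numpy as np

    lam = 1.0e-4     # standby failure rate per hour (assumed)
    tau = 4.0        # downtime per test, hours (assumed)
    T = np.linspace(100.0, 5000.0, 500)   # candidate surveillance test intervals, hours

    def unavailability(T, p_a, p_b):
        """First-order average unavailability of a periodically tested component:
        random failures between tests (lam*T/2), test downtime (tau/T), a Type A error
        leaving a good train disabled until the next test (p_a), and a Type B error
        letting an existing failure persist one more interval (p_b*lam*T)."""
        return lam * T / 2.0 + tau / T + p_a + p_b * lam * T

    for p_a, p_b in [(0.0, 0.0), (1e-3, 0.0), (0.0, 1e-1), (1e-3, 1e-1)]:
        u = unavailability(T, p_a, p_b)
        i = int(np.argmin(u))
        print(f"pA={p_a:.0e}, pB={p_b:.0e}: best T ~ {T[i]:5.0f} h, "
              f"min unavailability ~ {u[i]:.2e}")
    ```

    In this simplified form the Type A term raises the unavailability floor directly, which is qualitatively consistent with the paper's finding that Type A errors dominate; the full event tree treatment is needed for quantitative test interval recommendations.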

  5. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    Science.gov (United States)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can produce errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed. This eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated, and therefore their optical paths do not overlap. Thus, the main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  6. A New Paradigm for Diagnosing Contributions to Model Aerosol Forcing Error

    Science.gov (United States)

    Jones, A. L.; Feldman, D. R.; Freidenreich, S.; Paynter, D.; Ramaswamy, V.; Collins, W. D.; Pincus, R.

    2017-12-01

    A new paradigm in benchmark absorption-scattering radiative transfer is presented that enables both the globally averaged and spatially resolved testing of climate model radiation parameterizations in order to uncover persistent sources of biases in the aerosol instantaneous radiative effect (IRE). A proof of concept is demonstrated with the Geophysical Fluid Dynamics Laboratory AM4 and Community Earth System Model 1.2.2 climate models. Instead of prescribing atmospheric conditions and aerosols, as in prior intercomparisons, native snapshots of the atmospheric state and aerosol optical properties from the participating models are used as inputs to an accurate radiation solver to uncover model-relevant biases. These diagnostic results show that the models' aerosol IRE bias is of the same magnitude as the persistent range cited (∼1 W/m2) and also varies spatially and with intrinsic aerosol optical properties. The findings underscore the significance of native model error analysis and its dispositive ability to diagnose global biases, confirming its fundamental value for the Radiative Forcing Model Intercomparison Project.

  7. Errors of isotope conveyor weigher caused by profile variations and shift of material

    International Nuclear Information System (INIS)

    Machaj, B.

    1977-01-01

    Results of investigations of an isotope conveyor weigher in transmission geometry, with a long plastic scintillator as the detector, are presented in the paper. The results indicate that errors caused by material shift across the conveyor belt can be decreased by shaping the probe's sensitivity to incident radiation along its axis by means of additional radiation absorbents. The errors caused by material profile variations can effectively be diminished by increasing the photon energy. Application of 60Co instead of 137Cs ensured more than three times lower errors caused by profile variation. Errors caused by vertical movements of the belt with material decrease considerably when the single point source situated in the center of the measuring head is replaced by at least two point sources situated off-center, above the edges of the belt. (author)

  8. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.

    Science.gov (United States)

    Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle

    2011-05-01

    We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method is an extension of the 2q-MUSIC (q ≥ 1) approach for solving the EEG/MEG inverse problem, when spatially-extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: i) the parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources and ii) the introduction of an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher order statistics (q ≥ 2) offers a better robustness with respect to Gaussian noise of unknown spatial coherence and modeling errors. As a result we reduced the penalizing effects of both the background cerebral activity that can be seen as a Gaussian and spatially correlated noise, and the modeling errors induced by the non-exact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically-relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms. Copyright © 2011 Elsevier Inc. All rights reserved.
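
    The subspace scan underlying any MUSIC-type localizer can be sketched in a few lines; this is a plain second-order (q = 1) MUSIC scan over a random, hypothetical lead-field matrix with a known number of point sources, not the 2q-ExSo-MUSIC algorithm or its extended-source parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_sensors, n_candidates, n_samples = 32, 200, 1000

    L = rng.normal(size=(n_sensors, n_candidates))   # hypothetical lead-field matrix
    L /= np.linalg.norm(L, axis=0)                   # unit-norm candidate topographies

    true_idx = [40, 125]                             # two active point sources
    S = rng.normal(size=(len(true_idx), n_samples))  # source time courses
    X = L[:, true_idx] @ S + 0.2 * rng.normal(size=(n_sensors, n_samples))

    C = X @ X.T / n_samples                          # sensor covariance (2nd-order statistics)
    eigval, eigvec = np.linalg.eigh(C)
    E_noise = eigvec[:, : n_sensors - len(true_idx)] # noise subspace (smallest eigenvalues)

    # MUSIC metric: a candidate whose topography is orthogonal to the noise subspace
    # scores close to 1; background activity and modeling errors lower the contrast.
    score = 1.0 - np.sum((E_noise.T @ L) ** 2, axis=0)
    print("top-scoring candidates:", np.sort(np.argsort(score)[-2:]))
    ```

    The record's contribution lies in replacing the point-source metric with one parameterized for spatially extended patches and in using higher-order statistics (q ≥ 2) for robustness, which this sketch does not attempt.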

  10. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in malfunctioning of complex, technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20-65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments and applying them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed

  11. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Directory of Open Access Journals (Sweden)

    Timo B Brakenhoff

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
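
    The point that classical error does not always attenuate can be reproduced with a small simulation; the data-generating model and noise scales below are illustrative assumptions rather than the cardiovascular examples used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 500_000

    conf = rng.normal(size=n)                            # confounder (e.g. age, standardized)
    expo = 0.6 * conf + rng.normal(scale=0.8, size=n)    # exposure correlated with confounder
    y = 1.0 * expo + 1.5 * conf + rng.normal(size=n)     # true exposure effect = 1.0

    def exposure_coef(y, expo_obs, conf_obs):
        """OLS fit of y ~ 1 + exposure + confounder; returns the exposure coefficient."""
        Z = np.column_stack([np.ones(n), expo_obs, conf_obs])
        return np.linalg.lstsq(Z, y, rcond=None)[0][1]

    noise = lambda s: rng.normal(scale=s, size=n)        # classical, independent error
    print("no measurement error:     ", round(exposure_coef(y, expo, conf), 3))
    print("error in exposure only:   ", round(exposure_coef(y, expo + noise(0.8), conf), 3))
    print("error in confounder only: ", round(exposure_coef(y, expo, conf + noise(1.0)), 3))
    print("error in both:            ", round(exposure_coef(y, expo + noise(0.8), conf + noise(1.0)), 3))
    ```

    Error in the exposure alone attenuates its coefficient, while error in the confounder alone leaves residual confounding that inflates it; with error in both, the net direction depends on the relative noise levels, exactly the ambiguity the authors warn about.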

  12. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than for the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
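
    The pairwise, hyperbola-based view of the problem can be sketched with a simple robust grid search; the sensor layout, wave speed, noise levels and the absolute-value misfit below are assumptions chosen for illustration and are not the VFOM objective function itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    v = 5000.0                                            # assumed wave speed, m/s
    sensors = np.array([[0, 0], [1000, 0], [0, 1000], [1000, 1000], [500, 1200]], float)
    src = np.array([620.0, 410.0])                        # true source (unknown to the solver)

    t = np.linalg.norm(sensors - src, axis=1) / v + rng.normal(0.0, 1e-4, len(sensors))
    t[2] += 0.02                                          # one arrival carries a large picking error

    pairs = [(i, j) for i in range(len(sensors)) for j in range(i + 1, len(sensors))]
    xs = ys = np.arange(0.0, 1201.0, 5.0)
    best, best_cost = None, np.inf
    for x in xs:
        for y in ys:
            d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
            # Robust (absolute-value) misfit over sensor-pair time differences keeps the
            # single bad pick from dominating the solution.
            cost = sum(abs((t[i] - t[j]) - (d[i] - d[j]) / v) for i, j in pairs)
            if cost < best_cost:
                best, best_cost = (x, y), cost

    print("estimated source:", best, " true source:", tuple(src))
    ```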

  13. 41 CFR 101-26.310 - Ordering errors.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Ordering errors. 101-26.310 Section 101-26.310 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND...

  14. An experimental approach to validating a theory of human error in complex systems

    Science.gov (United States)

    Morris, N. M.; Rouse, W. B.

    1985-01-01

    The problem of 'human error' is pervasive in engineering systems in which the human is involved. In contrast to the common engineering approach of dealing with error probabilistically, the present research seeks to alleviate problems associated with error by gaining a greater understanding of causes and contributing factors from a human information processing perspective. The general approach involves identifying conditions which are hypothesized to contribute to errors, and experimentally creating the conditions in order to verify the hypotheses. The conceptual framework which serves as the basis for this research is discussed briefly, followed by a description of upcoming research. Finally, the potential relevance of this research to design, training, and aiding issues is discussed.

  15. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.

    2010-12-01

    In this paper, we address the problem of reducing the occurrence of human errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.

  16. On the meniscus formation and the negative hydrogen ion extraction from ITER neutral beam injection relevant ion source

    International Nuclear Information System (INIS)

    Mochalskyy, S; Wünderlich, D; Ruf, B; Fantz, U; Franzen, P; Minea, T

    2014-01-01

    The development of a large-area (A_source,ITER = 0.9 × 2 m2) hydrogen negative ion (NI) source constitutes a crucial step in the construction of the neutral beam injectors of the international fusion reactor ITER. To understand the plasma behaviour in the boundary layer close to the extraction system, the 3D PIC MCC code ONIX is exploited. A direct cross-checked analysis of the simulation and experimental results from the ITER-relevant BATMAN source testbed with a smaller area (A_source,BATMAN ≈ 0.32 × 0.59 m2) has been conducted for a low perveance beam, but with a full set of plasma parameters available. ONIX has been partially benchmarked by comparison to the results obtained using the commercial particle tracing code for positive ion extraction, KOBRA3D. Very good agreement has been found in terms of meniscus position and shape for simulations of different plasma densities. The influence of the initial plasma composition on the final meniscus structure was then investigated for NIs. As expected from the Child–Langmuir law, the results show that not only does the extraction potential play a crucial role in the meniscus formation, but so do the initial plasma density and its electronegativity. For the given parameters, the calculated meniscus is located a few mm downstream of the plasma grid aperture, provoking direct NI extraction. Most of the surface-produced NIs do not reach the plasma bulk, but move directly towards the extraction grid guided by the extraction field. Even for artificially increased electronegativity of the bulk plasma, the extracted NI current from this region is low. This observation indicates a high relevance of the direct NI extraction. These calculations show that the extracted NI current from the bulk region is low even if a complete ion–ion plasma is assumed, meaning that direct extraction of surface-produced ions should be present in order to obtain a sufficiently high extracted NI current density. The calculated
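
    For orientation, the Child–Langmuir space-charge limit referenced in the abstract can be evaluated directly; the extraction voltage and gap below are illustrative values, not the BATMAN or ITER operating parameters.

    ```python
    import math

    eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
    q = 1.602176634e-19       # elementary charge, C
    m_ion = 1.6735575e-27     # H- ion mass, kg (extra electron mass neglected)

    def child_langmuir_j(voltage, gap):
        """Space-charge-limited current density for a planar gap,
        J = (4*eps0/9) * sqrt(2*q/m) * V**1.5 / d**2, in A/m^2."""
        return (4.0 * eps0 / 9.0) * math.sqrt(2.0 * q / m_ion) * voltage ** 1.5 / gap ** 2

    # Example: 9 kV across a 6 mm extraction gap (assumed numbers); 1 A/m^2 = 0.1 mA/cm^2.
    j = child_langmuir_j(9.0e3, 6.0e-3)
    print(f"Child-Langmuir limit: {j:.0f} A/m^2 = {j * 0.1:.0f} mA/cm^2")
    ```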

  17. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    Science.gov (United States)

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  18. A general approach to error propagation

    International Nuclear Information System (INIS)

    Sanborn, J.B.

    1987-01-01

    A computational approach to error propagation is explained. It is shown that the application of the first-order Taylor theory to a fairly general expression representing an inventory or inventory-difference quantity leads naturally to a data structure that is useful for structuring error-propagation calculations. This data structure incorporates six types of data entities: (1) the objects in the material balance, (2) numerical parameters that describe these objects, (3) groups or sets of objects, (4) the terms which make up the material-balance equation, (5) the errors or sources of variance and (6) the functions or subroutines that represent Taylor partial derivatives. A simple algorithm based on this data structure can be defined using formulas that are sums of squares of sums. The data structures and algorithms described above have been implemented as computer software in FORTRAN for IBM PC-type machines. A free-form data-entry format allows users to separate data as they wish into separate files and enter data using a text editor. The program has been applied to the computation of limits of error for inventory differences (LEIDs) within the DOE complex. 1 ref., 3 figs
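
    The first-order Taylor propagation the report describes reduces, for uncorrelated inputs, to summing squared sensitivity-weighted uncertainties; the sketch below uses numerical partial derivatives and a hypothetical four-term inventory-difference expression, not the report's data structure or FORTRAN implementation.

    ```python
    import numpy as np

    def propagate_variance(f, x, cov, h=1e-6):
        """First-order Taylor error propagation: var(f) ~ g^T C g, where g is the gradient
        of f at x (estimated by central differences) and C is the input covariance."""
        x = np.asarray(x, dtype=float)
        g = np.empty_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g @ cov @ g

    # Hypothetical inventory difference: ID = beginning + receipts - shipments - ending.
    f = lambda p: p[0] + p[1] - p[2] - p[3]
    values = np.array([120.0, 35.0, 40.0, 113.0])        # kg, illustrative
    sigmas = np.array([0.8, 0.3, 0.3, 0.8])              # measurement standard deviations, kg
    cov = np.diag(sigmas ** 2)                           # uncorrelated error sources

    var = propagate_variance(f, values, cov)
    print(f"ID = {f(values):.1f} kg, limit of error (2 sigma) = +/- {2.0 * np.sqrt(var):.2f} kg")
    ```

    Correlated error sources enter through the off-diagonal terms of the covariance matrix, which is where the report's grouping of objects and shared error terms becomes important.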

  19. Medication administration errors in an intensive care unit in Ethiopia

    Directory of Open Access Journals (Sweden)

    Agalu Asrat

    2012-05-01

    Background: Medication administration errors in patient care have been shown to be frequent and serious. Such errors are particularly prevalent in highly technical specialties such as the intensive care unit (ICU). In Ethiopia, the prevalence of medication administration errors in the ICU is not studied. Objective: To assess medication administration errors in the intensive care unit of Jimma University Specialized Hospital (JUSH), Southwest Ethiopia. Methods: A prospective observation-based cross-sectional study was conducted in the ICU of JUSH from February 7 to March 24, 2011. All medication interventions administered by the nurses to all patients admitted to the ICU during the study period were included in the study. Data were collected by directly observing drug administration by the nurses, supplemented with review of medication charts. Data were edited, coded and entered into SPSS for Windows version 16.0. Descriptive statistics were used to measure the magnitude and type of the problem under study. Results: The prevalence of medication administration errors in the ICU of JUSH was 621 (51.8%). Common administration errors were attributed to wrong timing (30.3%), omission due to unavailability (29.0%) and missed doses (18.3%), among others. Errors associated with antibiotics took the lion's share of medication administration errors (36.7%). Conclusion: Medication errors at the administration phase were highly prevalent in the ICU of Jimma University Specialized Hospital. Supervision of the nurses administering medications by more experienced ICU nurses or other relevant professionals at regular intervals is helpful in ensuring that medication errors do not occur as frequently as observed in this study.

  20. Review of advances in human reliability analysis of errors of commission, Part 1: EOC identification

    International Nuclear Information System (INIS)

    Reer, Bernhard

    2008-01-01

    In close connection with examples relevant to contemporary probabilistic safety assessment (PSA), a review of advances in human reliability analysis (HRA) of post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions, has been carried out. The review comprises both EOC identification (part 1) and quantification (part 2); part 1 is presented in this article. Emerging HRA methods addressing the problem of EOC identification are: A Technique for Human Event Analysis (ATHEANA), the EOC HRA method developed by Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), the Misdiagnosis Tree Analysis (MDTA) method, and the Commission Errors Search and Assessment (CESA) method. Most of the EOCs referred to in predictive studies comprise the stop of running or the inhibition of anticipated functions; a few comprise the start of a function. The CESA search scheme-which proceeds from possible operator actions to the affected systems to scenarios and uses procedures and importance measures as key sources of input information-provides a formalized way for identifying relatively important scenarios with EOC opportunities. In the implementation however, attention should be paid regarding EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions

  1. Attentional capture by irrelevant transients leads to perceptual errors in a competitive change detection task

    Directory of Open Access Journals (Sweden)

    Daniel eSchneider

    2012-05-01

    Theories on visual change detection imply that attention is a necessary but not sufficient prerequisite for aware perception. Misguidance of attention due to salient irrelevant distractors can therefore lead to severe deficits in change detection. The present study investigates the mechanisms behind such perceptual errors and their relation to error processing on higher cognitive levels. Participants had to detect a luminance change that occasionally occurred simultaneously with an irrelevant orientation change in the opposite hemi-field (conflict condition). By analyzing event-related potentials in the EEG separately in those error-prone conflict trials for correct and erroneous change detection, we demonstrate that only correct change detection was associated with the allocation of attention to the relevant luminance change. Erroneous change detection was associated with an initial capture of attention towards the irrelevant orientation change in the N1 time window and a lack of subsequent target selection processes (N2pc). Errors were additionally accompanied by an increase of the fronto-central N2 and a kind of error negativity (Ne or ERN), which, however, peaked prior to the response. These results suggest that a strong perceptual conflict by salient distractors can disrupt the further processing of relevant information and thus affect its aware perception. Yet, it does not impair higher cognitive processes for conflict and error detection, indicating that these processes are independent from awareness.

  2. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    Science.gov (United States)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global-scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing ZEUS sites (as well as potential future sites) to simulate arrival time data between each source and each ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
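
    The Monte Carlo step described here, perturbing arrival times and re-running a retrieval to build error statistics, can be sketched for a single trial location; the planar geometry, the four hypothetical receivers and the plain least-squares solver below are simplifications standing in for the Iterative Oblate method on the real network.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(5)
    c = 3.0e8                                      # propagation speed, m/s (simplified)
    rx = np.array([[0.0, 0.0], [2.0e6, 0.0], [0.0, 2.0e6], [2.0e6, 2.0e6]])  # receivers, m
    src = np.array([0.7e6, 1.1e6])                 # one trial source location
    sigma_t = 20e-6                                # timing error standard deviation, s

    def time_diffs(p):
        d = np.linalg.norm(rx - p, axis=1) / c
        return d[1:] - d[0]                        # arrival time differences vs. receiver 0

    errors = []
    for _ in range(100):                           # 100 noisy realizations per location
        noisy = time_diffs(src) + rng.normal(0.0, sigma_t * np.sqrt(2.0), len(rx) - 1)
        fit = least_squares(lambda p: time_diffs(p) - noisy, x0=[1.0e6, 1.0e6])
        errors.append(np.linalg.norm(fit.x - src))

    print(f"median location error: {np.median(errors) / 1e3:.1f} km")
    ```

    Repeating this over a grid of source locations and color-coding the median error is essentially how the error maps described above are assembled.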

  3. MO-FG-202-07: Real-Time EPID-Based Detection Metric For VMAT Delivery Errors

    International Nuclear Information System (INIS)

    Passarge, M; Fix, M K; Manser, P; Stampanoni, M F M; Siebers, J V

    2016-01-01

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error

  4. MO-FG-202-07: Real-Time EPID-Based Detection Metric For VMAT Delivery Errors

    Energy Technology Data Exchange (ETDEWEB)

    Passarge, M; Fix, M K; Manser, P [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern (Switzerland); Stampanoni, M F M [Institute for Biomedical Engineering, ETH Zurich, and PSI, Villigen (Switzerland); Siebers, J V [Department of Radiation Oncology, University of Virginia, Charlottesville, VA (United States)

    2016-06-15

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error
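
    One of the checks in the sequence above, the (3%, 3 mm) gamma evaluation, is a standard comparison and can be illustrated in one dimension; the synthetic profiles below are assumptions, and the sketch is a generic global gamma index rather than the authors' EPID implementation.

    ```python
    import numpy as np

    def gamma_1d(ref, meas, x, dose_tol=0.03, dist_tol=3.0):
        """Global 1-D gamma index: dose_tol is a fraction of the reference maximum,
        dist_tol is in the same units as x. Returns one gamma value per reference point."""
        dmax = ref.max()
        gam = np.empty_like(ref)
        for i, (xi, di) in enumerate(zip(x, ref)):
            dd = (meas - di) / (dose_tol * dmax)     # dose-difference term
            dx = (x - xi) / dist_tol                 # distance-to-agreement term
            gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
        return gam

    x = np.arange(0.0, 100.0, 1.0)                        # position, mm
    reference = np.exp(-((x - 50.0) / 15.0) ** 2)         # synthetic predicted EPID profile
    measured = 1.02 * np.exp(-((x - 51.0) / 15.0) ** 2)   # 2% scaling and a 1 mm shift

    g = gamma_1d(reference, measured, x)
    print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0) * 100:.1f} %")
    ```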

  5. Sample presentation, sources of error and future perspectives on the application of vibrational spectroscopy in the wine industry.

    Science.gov (United States)

    Cozzolino, Daniel

    2015-03-30

    Vibrational spectroscopy encompasses a number of techniques and methods including ultra-violet, visible, Fourier transform infrared or mid infrared, near infrared and Raman spectroscopy. The use and application of spectroscopy generates spectra containing hundreds of variables (absorbances at each wavenumbers or wavelengths), resulting in the production of large data sets representing the chemical and biochemical wine fingerprint. Multivariate data analysis techniques are then required to handle the large amount of data generated in order to interpret the spectra in a meaningful way in order to develop a specific application. This paper focuses on the developments of sample presentation and main sources of error when vibrational spectroscopy methods are applied in wine analysis. Recent and novel applications will be discussed as examples of these developments. © 2014 Society of Chemical Industry.

  6. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
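
    The estimation step can be sketched with a small Gaussian process regression written directly in NumPy; the drifting error rate, the window size and the RBF kernel hyperparameters are illustrative assumptions, not the protocol's actual settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic slowly drifting physical error rate, estimated in successive time windows
    # from error-correction data with binomial counting noise.
    t = np.arange(0.0, 50.0, 1.0)
    true_rate = 0.01 + 0.004 * np.sin(t / 8.0)
    shots = 2000
    observed = rng.binomial(shots, true_rate) / shots

    def rbf(a, b, length=8.0, amp=0.004):
        """Squared-exponential covariance between two sets of times."""
        return amp ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

    noise_var = observed.mean() * (1.0 - observed.mean()) / shots   # approx. binomial variance
    K = rbf(t, t) + noise_var * np.eye(len(t))
    alpha = np.linalg.solve(K, observed - observed.mean())

    post_mean = observed.mean() + rbf(t, t) @ alpha                 # smoothed rate estimate
    post_var = np.diag(rbf(t, t) - rbf(t, t) @ np.linalg.solve(K, rbf(t, t)))

    print(f"max |estimate - true rate|: {np.max(np.abs(post_mean - true_rate)):.4f}")
    print(f"typical posterior std:      {np.sqrt(np.maximum(post_var, 0.0)).mean():.4f}")
    ```

    Evaluating the same posterior at future times yields the predicted error rates that a decoder could then use.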

  7. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

    Science.gov (United States)

    Sarcevic, Aleksandra

    2009-01-01

    An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

  8. Technical Note: Interference errors in infrared remote sounding of the atmosphere

    Directory of Open Access Journals (Sweden)

    R. Sussmann

    2007-07-01

    Classical error analysis in remote sounding distinguishes between four classes: "smoothing errors," "model parameter errors," "forward model errors," and "retrieval noise errors". For infrared sounding, "interference errors", which, in general, cannot be described by these four terms, can be significant. Interference errors originate from spectral residuals due to "interfering species" whose spectral features overlap with the signatures of the target species. A general method for quantification of interference errors is presented, which covers all possible algorithmic implementations, i.e., fine-grid retrievals of the interfering species or coarse-grid retrievals, and cases where the interfering species are not retrieved. In classical retrieval setups interference errors can exceed smoothing errors and can vary by orders of magnitude due to state dependency. An optimum strategy is suggested which practically eliminates interference errors by systematically minimizing the regularization strength applied to joint profile retrieval of the interfering species. This leads to an interfering-species selective deweighting of the retrieval. Details of microwindow selection are no longer critical for this optimum retrieval, and widened microwindows even lead to reduced overall (smoothing and interference) errors. As computational power increases, more and more operational algorithms will be able to utilize this optimum strategy in the future. The findings of this paper can be applied to soundings of all infrared-active atmospheric species, which include more than two dozen different gases relevant to climate and ozone. This holds for all kinds of infrared remote sounding systems, i.e., retrievals from ground-based, balloon-borne, airborne, or satellite spectroradiometers.

  9. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determination of an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to study a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
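
    The basic mechanism, a Beta-Binomial credibility interval that tightens when an informative prior replaces the uniform one, can be shown in a few lines; the test set size, error count and the particular Beta prior standing in for an empirically derived ME prior are all illustrative assumptions.

    ```python
    from scipy.stats import beta

    n_test, n_errors = 40, 6            # small holdout test: 6 misclassified out of 40

    def credibility_interval(a, b, level=0.95):
        """Equal-tailed Bayesian credibility interval of a Beta(a, b) posterior."""
        return (beta.ppf((1.0 - level) / 2.0, a, b),
                beta.ppf(1.0 - (1.0 - level) / 2.0, a, b))

    # Uniform prior Beta(1, 1): the interval is driven entirely by the 40 test examples.
    lo, hi = credibility_interval(1 + n_errors, 1 + n_test - n_errors)
    print(f"uniform prior:     [{lo:.3f}, {hi:.3f}]")

    # Informative prior, e.g. Beta(4, 21) with mean about 0.16, standing in for a maximum
    # entropy prior fitted to earlier design/test results on non-overlapping examples.
    lo, hi = credibility_interval(4 + n_errors, 21 + n_test - n_errors)
    print(f"informative prior: [{lo:.3f}, {hi:.3f}]")
    ```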

  10. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
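
    The iteration the authors propose can be illustrated on a scalar linear-Gaussian toy problem instead of the Lorenz-63 system: a Kalman filter and Rauch-Tung-Striebel smoother provide the E-step, and the model error variance Q is re-estimated in the M-step. Everything below (the random-walk model, noise levels, initialization) is an illustrative assumption and not the authors' open-source library.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy linear-Gaussian system: random-walk state, noisy observations (F = H = 1).
    Q_true, R, N = 0.5, 1.0, 2000
    x = np.cumsum(rng.normal(0.0, np.sqrt(Q_true), N))
    y = x + rng.normal(0.0, np.sqrt(R), N)

    def em_estimate_Q(y, R, Q0=5.0, iters=50):
        N = len(y)
        Q = Q0
        for _ in range(iters):
            # E-step, forward pass: Kalman filter.
            xp, Pp = np.zeros(N), np.zeros(N)      # predicted mean / variance
            xf, Pf = np.zeros(N), np.zeros(N)      # filtered mean / variance
            xprev, Pprev = y[0], R
            for k in range(N):
                xp[k], Pp[k] = xprev, Pprev + Q
                K = Pp[k] / (Pp[k] + R)
                xf[k] = xp[k] + K * (y[k] - xp[k])
                Pf[k] = (1.0 - K) * Pp[k]
                xprev, Pprev = xf[k], Pf[k]
            # E-step, backward pass: Rauch-Tung-Striebel smoother.
            xs, Ps = xf.copy(), Pf.copy()
            J = np.zeros(N)
            for k in range(N - 2, -1, -1):
                J[k] = Pf[k] / Pp[k + 1]
                xs[k] = xf[k] + J[k] * (xs[k + 1] - xp[k + 1])
                Ps[k] = Pf[k] + J[k] ** 2 * (Ps[k + 1] - Pp[k + 1])
            # M-step: expected squared state increment under the smoothing distribution.
            cross = J[:-1] * Ps[1:]                # lag-one smoothed covariances
            Q = np.mean((xs[1:] - xs[:-1]) ** 2 + Ps[1:] + Ps[:-1] - 2.0 * cross)
        return Q

    print(f"true Q = {Q_true}, EM estimate = {em_estimate_Q(y, R):.3f}")
    ```

    The ensemble version recommended in the abstract replaces the exact filter and smoother with their ensemble counterparts but keeps the same EM structure.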

  11. Measurement Error in Income and Schooling and the Bias of Linear Estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

    2017-01-01

    We propose a general framework for determining the extent of measurement error bias in ordinary least squares and instrumental variable (IV) estimators of linear models while allowing for measurement error in the validation source. We apply this method by validating Survey of Health, Ageing and Retirement in Europe data with Danish administrative registers. Contrary to most validation studies, we find that measurement error in income is classical once we account for imperfect validation data. We find nonclassical measurement error in schooling, causing a 38% amplification bias in IV estimators.

  12. Self-Interaction Error in Density Functional Theory: An Appraisal.

    Science.gov (United States)

    Bao, Junwei Lucas; Gagliardi, Laura; Truhlar, Donald G

    2018-05-03

    Self-interaction error (SIE) is considered to be one of the major sources of error in most approximate exchange-correlation functionals for Kohn-Sham density-functional theory (KS-DFT), and it is large with all local exchange-correlation functionals and with some hybrid functionals. In this work, we consider systems conventionally considered to be dominated by SIE. For these systems, we demonstrate that by using multiconfiguration pair-density functional theory (MC-PDFT), the error of a translated local density-functional approximation is significantly reduced (by a factor of 3) when using an MCSCF density and on-top density, as compared to using KS-DFT with the parent functional; the error in MC-PDFT with local on-top functionals is even lower than the error in some popular KS-DFT hybrid functionals. Density-functional theory, either in MC-PDFT form with local on-top functionals or in KS-DFT form with some functionals having 50% or more nonlocal exchange, has smaller errors for SIE-prone systems than does CASSCF, which has no SIE.

  13. Error Analysis in a Written Composition Análisis de errores en una composición escrita

    Directory of Open Access Journals (Sweden)

    David Alberto Londoño Vásquez

    2008-12-01

    Learners make errors in both comprehension and production. Some theoreticians have pointed out the difficulty of assigning the cause of failures in comprehension to an inadequate knowledge of a particular syntactic feature of a misunderstood utterance. Indeed, an error can be defined as a deviation from the norms of the target language. In this investigation, based on personal and professional experience, a written composition entitled "My Life in Colombia" will be analyzed on the basis of clinical elicitation (CE) research. CE involves getting the informant to produce data of any sort, for example, by means of a general interview or by asking the learner to write a composition. Some errors produced by a foreign language learner in her acquisition process will be analyzed, identifying the possible sources of these errors. Finally, four kinds of errors are classified: omission, addition, misinformation, and misordering.

  14. Learning from errors in radiology to improve patient safety.

    Science.gov (United States)

    Saeed, Shaista Afzal; Masroor, Imrana; Shafqat, Gulnaz

    2013-10-01

    To determine the views and practices of trainees and consultant radiologists about error reporting. Cross-sectional survey. Radiology trainees and consultant radiologists in four tertiary care hospitals in Karachi were approached in the second quarter of 2011. Participants were asked about their grade, sub-specialty interest, whether they kept a record/log of their errors (defined as a mistake that has management implications for the patient), the number of errors they made in the last 12 months and the predominant type of error. They were also asked about the details of their department's error meetings. All duly completed questionnaires were included in the study while the ones with incomplete information were excluded. A total of 100 radiologists participated in the survey. Of them, 34 were consultants and 66 were trainees. They had a wide range of sub-specialty interests such as CT, ultrasound, etc. Out of the 100 responders, 49 kept a personal record/log of their errors. When asked to recall the approximate number of errors made in the last 12 months, 73 (73%) of participants gave a varied response, with 1-5 errors mentioned by the majority, i.e. 47 (64.5%). Most of the radiologists (97%) claimed to receive information about their errors through multiple sources such as morbidity/mortality meetings, patients' follow-up, and through colleagues and consultants. Perceptual errors (66, 66%) were the predominant error type reported. Regular occurrence of error meetings and attendance of three or more error meetings in the last 12 months was reported by 35% of participants. The majority of these described the atmosphere of the error meetings as informative and comfortable (n = 22, 62.8%). It is of utmost importance to develop a culture of learning from mistakes by conducting error meetings and improving the process of recording and addressing errors to enhance patient safety.

  15. Estimating the Autocorrelated Error Model with Trended Data: Further Results,

    Science.gov (United States)

    1979-11-01

    Perhaps the most serious deficiency of OLS in the presence of autocorrelation is not inefficiency but bias in its estimated standard errors -- a bias... k for all t has variance var(b) = σ²/(Tk²). This refutes Maeshiro's (1976) conjecture that "an estimator utilizing relevant extraneous information
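
    The standard-error bias referred to above is easy to reproduce by simulation: with positively autocorrelated disturbances and a trending regressor, the conventional OLS formula understates the true sampling variability of the slope. The sketch below is purely illustrative and all parameter values are assumptions.

```python
# Sketch: with AR(1) errors and a trended regressor, the conventional OLS standard
# error understates the true sampling variability of the slope estimate.
import numpy as np

rng = np.random.default_rng(2)
T, rho, n_rep = 100, 0.8, 2000
x = np.arange(T, dtype=float) / T               # trended regressor
X = np.column_stack([np.ones(T), x])

slopes, reported_se = [], []
for _ in range(n_rep):
    u = np.zeros(T)
    eps = rng.normal(0, 1, T)
    for i in range(1, T):                       # AR(1) disturbances
        u[i] = rho * u[i-1] + eps[i]
    y = 1.0 + 2.0 * x + u
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (T - 2)
    var_b = s2 * np.linalg.inv(X.T @ X)[1, 1]   # conventional (i.i.d.) formula
    slopes.append(b[1]); reported_se.append(np.sqrt(var_b))

print("true sampling SD of slope :", np.std(slopes))
print("mean reported OLS SE      :", np.mean(reported_se))   # substantially smaller
```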

  16. Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.

    Science.gov (United States)

    Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian

    2016-04-01

    While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.

  17. Towards a realistic 3D simulation of the extraction region in ITER NBI relevant ion source

    Science.gov (United States)

    Mochalskyy, S.; Wünderlich, D.; Fantz, U.; Franzen, P.; Minea, T.

    2015-03-01

    The development of negative ion (NI) sources for ITER is strongly accompanied by modelling activities. The ONIX code addresses the physics of formation and extraction of negative hydrogen ions at caesiated sources as well as the amount of co-extracted electrons. In order to be closer to the experimental conditions the code has been improved. It now includes the bias potential applied to the first grid (plasma grid) of the extraction system, and the presence of Cs+ ions in the plasma. The simulation results show that such aspects play an important role for the formation of an ion-ion plasma in the boundary region by reducing the depth of the negative potential well in the vicinity of the plasma grid that limits the extraction of the NIs produced at the Cs-covered plasma grid surface. The influence of the initial temperature of the surface-produced NI and its emission rate on the NI density in the bulk plasma, which in turn affects the beam formation region, was analysed. The formation of the plasma meniscus, the boundary between the plasma and the beam, was investigated for extraction potentials of 5 and 10 kV. At the smaller extraction potential the meniscus moves closer to the plasma grid but, as in the case of 10 kV, the deepest meniscus bend point is still outside the aperture. Finally, a plasma containing the same amount of NI and electrons (nH− = ne = 10¹⁷ m⁻³), representing good source conditioning, was simulated. It is shown that at such conditions the extracted NI current can reach values of ∼32 mA cm⁻² using the ITER-relevant extraction potential of 10 kV and ∼19 mA cm⁻² at 5 kV. These results are in good agreement with experimental measurements performed at the small scale ITER prototype source at the test facility BATMAN.

  18. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement or of the time gain curve or range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimizing the number of errors made have been discussed, including those related to the appropriate examination technique, taking into account data from the case history and the use of the greatest possible number of additional options such as harmonic imaging, color and power Doppler and elastography. In the article, examples of errors resulting from the technical conditions of the method have been presented, as well as those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  19. The effect of a clinical pharmacist-led training programme on intravenous medication errors : a controlled before and after study

    NARCIS (Netherlands)

    Nguyen, Huong; Pham, Hong-Tham; Vo, Dang-Khoa; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja

    Background Little is known about interventions to reduce intravenous medication administration errors in hospitals, especially in low-and middle-income countries. Objective To assess the effect of a clinical pharmacist-led training programme on clinically relevant errors during intravenous

  20. Assessing Variability and Errors in Historical Runoff Forecasting with Physical Models and Alternative Data Sources

    Science.gov (United States)

    Penn, C. A.; Clow, D. W.; Sexstone, G. A.

    2017-12-01

    Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, thus costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990, and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a
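
    Whatever the underlying model, forecast accuracy of this kind is summarised from paired forecast and observed seasonal volumes; the sketch below computes a few basic error statistics using invented values purely for illustration.

```python
# Sketch: simple skill metrics for seasonal water-supply forecasts, computed from
# paired forecast and observed runoff volumes (the values below are invented).
import numpy as np

forecast = np.array([310., 280., 450., 390., 220.])   # forecast seasonal volume
observed = np.array([295., 330., 410., 400., 180.])   # observed seasonal volume

err = forecast - observed
bias = err.mean()                                      # mean (signed) forecast error
rmse = np.sqrt((err ** 2).mean())                      # root-mean-square error
skill = 1.0 - (err ** 2).sum() / ((observed - observed.mean()) ** 2).sum()  # NSE-style skill

print(f"bias {bias:.1f}  RMSE {rmse:.1f}  skill {skill:.2f}")
```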

  1. Grammar Errors in the Writing of Iraqi English Language Learners

    Directory of Open Access Journals (Sweden)

    Yasir Bdaiwi Jasim Al-Shujairi

    2017-10-01

    Several studies have been conducted to investigate the grammatical errors of Iraqi postgraduates and undergraduates in their academic writing. However, few studies have focused on the writing challenges that Iraqi pre-university students face. This research aims at examining the written discourse of Iraqi high school students and the common grammatical errors they make in their writing. The study had a mixed methods design. Through a convenience sampling method, 112 compositions were collected from Iraqi pre-university students. For purposes of triangulation, an interview was conducted. The data were analyzed using Corder's (1967) error analysis model and James' (1998) framework of grammatical errors. Furthermore, Brown's (2000) taxonomy was adopted to classify the types of errors. The results showed that Iraqi high school students have serious problems with the usage of verb tenses, articles, and prepositions. Moreover, the most frequent types of errors were Omission and Addition. Furthermore, it was found that intralanguage was the dominant source of errors. These findings may enlighten Iraqi students on the importance of correct grammar use for writing efficacy.

  2. Error estimates for ice discharge calculated using the flux gate approach

    Science.gov (United States)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While for estimating the errors in velocity there are well-established procedures, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use IceBridge operation GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice-velocities calculated from Sentinel-1 SAR data, to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
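
    Since the flux through a gate is essentially the product of cross-sectional area and depth-averaged velocity, independent relative errors in the two factors combine in quadrature. A minimal sketch of this propagation, with assumed values:

```python
# Sketch: propagating independent relative errors in cross-sectional area and
# ice velocity into the error of a flux-gate discharge estimate (values assumed).
import math

area, sigma_area = 2.0e5, 2.0e4           # m^2, e.g. ~10% area uncertainty
velocity, sigma_vel = 150.0, 7.5          # m/yr, e.g. ~5% velocity uncertainty

flux = area * velocity                    # m^3/yr of ice through the gate
rel_err = math.sqrt((sigma_area / area) ** 2 + (sigma_vel / velocity) ** 2)
sigma_flux = flux * rel_err

print(f"flux = {flux:.3e} m^3/yr +- {sigma_flux:.3e} ({100 * rel_err:.1f} %)")
```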

  3. Understanding error generation in fused deposition modeling

    Science.gov (United States)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  4. The effects of forecast errors on the merchandising of wind power

    International Nuclear Information System (INIS)

    Roon, Serafin von

    2012-01-01

    A permanent balance between consumption and generation is essential for a stable supply of electricity. In order to ensure this balance, all relevant load data have to be announced for the following day. Consequently, a day-ahead forecast of the wind power generation is required, which also forms the basis for the sale of the wind power on the wholesale market. The main subject of the study is the short-term power supply that balances wind power forecast errors at short notice. These forecast errors affect the revenues and expenses from selling and buying power in the day-ahead, intraday and balancing energy markets. The price effects resulting from the forecast errors are derived from an empirical analysis. In a scenario for the year 2020, the potential of conventional power plants to supply power at short notice is evaluated from a technical and economic point of view by a time series analysis and a unit commitment simulation.
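
    In the simplest market representation, the revenue effect of a forecast error is the difference between selling the day-ahead forecast and settling the deviation at balancing prices. The sketch below is a deliberately simplified, assumed market model with invented prices and time series, not the study's empirical analysis.

```python
# Simplified sketch: revenue impact of wind forecast errors when the day-ahead
# forecast is sold and the deviation is settled at balancing prices. All values assumed.
import numpy as np

rng = np.random.default_rng(3)
hours = 24
actual = np.clip(rng.normal(50, 20, hours), 0, None)             # MWh actually generated
forecast = np.clip(actual + rng.normal(0, 10, hours), 0, None)   # day-ahead forecast

p_da = 45.0       # day-ahead price, EUR/MWh
p_short = 70.0    # price paid for the shortfall when delivering less than sold
p_long = 25.0     # price received for surplus energy

error = actual - forecast
revenue = forecast * p_da + np.where(error < 0, error * p_short, error * p_long)
perfect = actual * p_da                                           # revenue with a perfect forecast

print(f"revenue with forecast errors : {revenue.sum():8.0f} EUR")
print(f"revenue with perfect forecast: {perfect.sum():8.0f} EUR")
```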

  5. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    about weakness in audits made by the operating organisation and in tests relating to plant operation. The number of plant-specific maintenance records used as input material was high and the findings were discussed thoroughly with the plant maintenance personnel. The results indicated that instrumentation is more prone to human error than the rest of maintenance. Most errors stem from refuelling outage periods and about a half of them were identified during the same outage they were committed. Plant modifications are a significant source of common cause failures. The number of dependent errors could be reduced by improved co-ordination and auditing, post-installation checking, training and start-up testing programmes. (orig.)

  6. Space charge and magnet error simulations for the SNS accumulator ring

    International Nuclear Information System (INIS)

    Beebe-Wang, J.; Fedotov, A.V.; Wei, J.; Machida, S.

    2000-01-01

    The effects of space charge forces and magnet errors in the beam of the Spallation Neutron Source (SNS) accumulator ring are investigated. In this paper, the focus is on the emittance growth and halo/tail formation in the beam due to space charge with and without magnet errors. The beam properties of different particle distributions resulting from various injection painting schemes are investigated. Different working points in the design of SNS accumulator ring lattice are compared. The simulations in close-to-resonance condition in the presence of space charge and magnet errors are presented. (author)

  7. An approach to improving the structure of error-handling code in the linux kernel

    DEFF Research Database (Denmark)

    Saha, Suman; Lawall, Julia; Muller, Gilles

    2011-01-01

    The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where...... an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes....

  8. Error analysis of pupils in calculating with fractions

    OpenAIRE

    Uranič, Petra

    2016-01-01

    In this thesis I examine the correlation between the frequency of errors that seventh grade pupils make in their calculations with fractions and their level of understanding of fractions. Fractions are a relevant and demanding theme in the mathematics curriculum. Although we use fractions on a daily basis, pupils find learning fractions to be very difficult. They generally do not struggle with the concept of fractions itself, but they frequently have problems with mathematical operations ...

  9. ERM model analysis for adaptation to hydrological model errors

    Science.gov (United States)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models and can lead to unrealistic results. Therefore, to overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. In the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to control these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three common types of error in hydrological modelling: timing, shape and volume. The new lumped ERM model has been selected for this study to evaluate whether its parameters can be updated to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.

  10. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  11. Calculating Error Percentage in Using Water Phantom Instead of Soft Tissue Concerning 103Pd Brachytherapy Source Distribution via Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    OL Ahmadi

    2015-12-01

    Introduction: 103Pd is a low energy source used in brachytherapy. According to the standards of the American Association of Physicists in Medicine, determination of the dosimetric parameters of brachytherapy sources before clinical application is considered highly important. Therefore, the present study aimed to compare the dosimetric parameters of the target source using a water phantom and soft tissue. Methods: According to the TG-43U1 protocol, the dosimetric parameters around the 103Pd source were compared for a water phantom with a density of 0.998 g/cm3 and soft tissue with a density of 1.04 g/cm3, on the longitudinal and transverse axes, using the MCNP4C code, and the relative differences between the two conditions were evaluated. Results: The simulation results indicated that, for the dosimetric parameters depending on the radial dose function and the anisotropy function, using the water phantom instead of soft tissue gives good consistency up to a distance of 1.5 cm. With increasing distance the difference increased, reaching 4% at 6 cm from the source. Conclusions: The results for the soft tissue phantom compared with those for the water phantom indicated a 4% relative difference at a distance of 6 cm from the source. Therefore, the results of the water phantom, with a maximum error of 4%, can be used in practical applications instead of soft tissue. Moreover, the differences obtained at each distance when using the soft tissue phantom could be corrected.

  12. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme in which error is corrected at the receiver from the erroneous copies. Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both these have been addressed recently by two schemes known as Packet Reversed Packet Combining (PRPC) Scheme, and Modified Packet Combining (MPC) Scheme respectively. In the letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
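
    The basic packet-combining idea — treating the bit positions where two erroneous copies differ as candidate error locations and testing the possible corrections against the packet's checksum — can be sketched as follows. This is a generic illustration of the principle only, not the PRPC, MPC or forecasting schemes themselves, and the CRC-based check is an assumption; it also makes visible the failure mode noted above, since identical error locations leave no differing positions to test.

```python
# Generic sketch of packet combining: bit positions where two received copies differ
# are candidate error locations; candidate corrections are checked against a CRC.
# It illustrates the principle only (it fails when both copies err in the same bit).
import itertools
import zlib

def crc_ok(bits, expected_crc):
    return zlib.crc32(bytes(bits)) == expected_crc

def packet_combine(copy1, copy2, expected_crc):
    diff = [i for i in range(len(copy1)) if copy1[i] != copy2[i]]
    for choice in itertools.product([0, 1], repeat=len(diff)):
        candidate = list(copy1)
        for pos, bit in zip(diff, choice):
            candidate[pos] = bit
        if crc_ok(candidate, expected_crc):
            return candidate
    return None                                    # uncorrectable with these two copies

original = [1, 0, 1, 1, 0, 0, 1, 0]
crc = zlib.crc32(bytes(original))
rx1 = original.copy(); rx1[2] ^= 1                 # copy 1 with a bit error at position 2
rx2 = original.copy(); rx2[5] ^= 1                 # copy 2 with a bit error at position 5
print(packet_combine(rx1, rx2, crc) == original)   # True
```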

  13. Statistical error estimation of the Feynman-α method using the bootstrap method

    International Nuclear Information System (INIS)

    Endo, Tomohiro; Yamamoto, Akio; Yagi, Takahiro; Pyeon, Cheol Ho

    2016-01-01

    The applicability of the bootstrap method is investigated to estimate the statistical error of the Feynman-α method, which is one of the subcritical measurement techniques based on reactor noise analysis. In the Feynman-α method, the statistical error can be simply estimated from multiple measurements of reactor noise; however, this requires additional measurement time for the repeated measurements. Using a resampling technique called the 'bootstrap method', the standard deviation and confidence interval of measurement results obtained by the Feynman-α method can be estimated as the statistical error using only a single measurement of reactor noise. In order to validate our proposed technique, we carried out a passive measurement of reactor noise without any external source, i.e. with only the inherent neutron source from spontaneous fission and (α,n) reactions in nuclear fuels, at the Kyoto University Criticality Assembly. Through the actual measurement, it is confirmed that the bootstrap method is applicable to approximately estimate the statistical error of measurement results obtained by the Feynman-α method. (author)
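
    The resampling step itself is straightforward: a single measured sequence is resampled with replacement, the statistic of interest is recomputed on each resample, and the spread of the resampled statistics serves as the statistical error. The sketch below uses simulated gated counts and a plain i.i.d. bootstrap purely for illustration; correlated reactor-noise data may call for block resampling.

```python
# Sketch of a bootstrap estimate of the statistical error of a derived statistic
# from a single measurement. The gated-count data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(4)
counts = rng.poisson(20, size=400)                 # one measurement: counts per time gate

def feynman_y(c):
    return c.var(ddof=1) / c.mean() - 1.0          # variance-to-mean ratio minus one

boot = np.array([
    feynman_y(rng.choice(counts, size=counts.size, replace=True))
    for _ in range(2000)
])

print(f"Y = {feynman_y(counts):.3f}")
print(f"bootstrap std. error = {boot.std(ddof=1):.3f}")
print(f"95% CI = ({np.percentile(boot, 2.5):.3f}, {np.percentile(boot, 97.5):.3f})")
```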

  14. Theoretical-and experimental analysis of the errors involved in the wood moisture determination by gamma-ray attenuation

    International Nuclear Information System (INIS)

    Aguiar, O.

    1983-01-01

    The sources of errors in wood moisture determination by gamma-ray attenuation were sought. Equations were proposed for determining errors and for ideal sample thickness. A series of measurements of moisture content in wood samples of Pinus oocarpa was made and the experimental errors were compared with the theoretical errors. (Author) [pt

  15. Intrinsic errors in transporting a single-spin qubit through a double quantum dot

    Science.gov (United States)

    Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.

    2017-07-01

    Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunnelings, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.

  16. A national physician survey of diagnostic error in paediatrics.

    Science.gov (United States)

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54 % (n = 127). Respondents had a median of 9-year clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0 %) respondents. Consultants reported significantly less diagnostic errors compared to trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15 %. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  17. Recognition of medical errors' reporting system dimensions in educational hospitals.

    Science.gov (United States)

    Yarmohammadian, Mohammad H; Mohammadinia, Leila; Tavakoli, Nahid; Ghalriz, Parvin; Haghshenas, Abbas

    2014-01-01

    Nowadays medical errors are one of the serious issues in the health-care system and constitute a threat to patient safety. The most important step towards promoting safety is identifying errors and their causes in order to recognize, correct and eliminate them. Concerns about recurring medical errors and the harm they cause have led to the design and establishment of medical error reporting systems for hospitals and centres providing therapeutic services. The aim of this study is the recognition of the dimensions of medical error reporting systems in educational hospitals. This research is a descriptive-analytical, qualitative study carried out in the Shahid Beheshti educational therapeutic centre in Isfahan during 2012. Relevant information was collected through 15 face-to-face interviews, each lasting about one hour, and five focus group discussions of about 45 minutes each, composed of the matron, the educational supervisor, the health officer, the health educator, and all of the head nurses. Data from the interviews and discussion sessions were coded, and the results were then reviewed with knowledgeable persons and categorized after incorporating their feedback. In order to ensure the correctness of the information, the tables were presented to the study's interviewers and the final corrections were confirmed based on their views. The information extracted from the interviews and discussion groups was divided into nine main categories after content analysis and thematic coding, and their subcategories were fully described. The identified dimensions comprise nine domains: the concept of medical error, error cases as perceived by nurses, barriers to medical error reporting, employees' motivational factors for error reporting, purposes of a medical error reporting system, challenges and opportunities of error reporting, and a desired system

  18. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    In addition, a combination of component failure and human error is often found in spectacular events. The Rasmussen Report and the German Risk Assessment Study, in particular, show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  19. Evaluation of soft errors rate in a commercial memory EEPROM

    International Nuclear Information System (INIS)

    Claro, Luiz H.; Silva, A.A.; Santos, Jose A.

    2011-01-01

    Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along the track. These charges, when collected, can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. Soft error susceptibility differs between memory technologies; hence, experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented along with the results for SER in a commercial electrically erasable and programmable read-only memory (EEPROM). (author)

  20. Field testing for cosmic ray soft errors in semiconductor memories

    International Nuclear Information System (INIS)

    O'Gorman, T.J.; Ross, J.M.; Taber, A.H.; Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; Walsh, J.L.

    1996-01-01

    This paper presents a review of experiments performed by IBM to investigate the causes of soft errors in semiconductor memory chips under field test conditions. The effects of alpha-particles and cosmic rays are separated by comparing multiple measurements of the soft-error rate (SER) of samples of memory chips deep underground and at various altitudes above the earth. The results of case studies on four different memory chips show that cosmic rays are an important source of the ionizing radiation that causes soft errors. The results of field testing are used to confirm the accuracy of the modeling and the accelerated testing of chips

  1. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    International Nuclear Information System (INIS)

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-01-01

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binned Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically
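
    A rough illustration of the fitting step, not the authors' implementation: fit a low-order ARMA model (here the lowest-order case of the ARMA(p, p-1) family, p = 1) to a mean-subtracted, binned source series and read a dominance-ratio estimate from the fitted AR coefficient. Treating that coefficient as the dominance ratio, and the synthetic AR(1)-like series below, are assumptions of this sketch.

```python
# Rough sketch: ARMA fitting of a binned fission-source time series, with the
# dominance-ratio estimate taken from the fitted AR coefficient (an assumption of
# this sketch). The synthetic series mimics an AR(1)-like error propagation.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
dr_true, n_cycles = 0.85, 3000
s = np.zeros(n_cycles)
for t in range(1, n_cycles):                       # synthetic binned-source fluctuation
    s[t] = dr_true * s[t-1] + rng.normal(0, 1)

fit = ARIMA(s - s.mean(), order=(1, 0, 0), trend="n").fit()   # ARMA(p, p-1) with p = 1
params = dict(zip(fit.model.param_names, fit.params))
dr_est = abs(params["ar.L1"])                      # assumed dominance-ratio estimate

print(f"estimated dominance ratio ~ {dr_est:.3f} (true {dr_true})")
```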

  2. Error analysis in predictive modelling demonstrated on mould data.

    Science.gov (United States)

    Baranyi, József; Csernus, Olívia; Beczner, Judit

    2014-01-17

    The purpose of this paper was to develop a predictive model for the effect of temperature and water activity on the growth rate of Aspergillus niger and to determine the sources of the error when the model is used for prediction. Parallel mould growth curves, derived from the same spore batch, were generated and fitted to determine their growth rate. The variances of replicate ln(growth-rate) estimates were used to quantify the experimental variability, inherent to the method of determining the growth rate. The environmental variability was quantified by the variance of the respective means of replicates. The idea is analogous to the "within group" and "between groups" variability concepts of ANOVA procedures. A (secondary) model, with temperature and water activity as explanatory variables, was fitted to the natural logarithm of the growth rates determined by the primary model. The model error and the experimental and environmental errors were ranked according to their contribution to the total error of prediction. Our method can readily be applied to analysing the error structure of predictive models of bacterial growth, too. © 2013.
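
    The "within group" versus "between groups" decomposition used above to separate experimental from environmental variability can be written down in a few lines; the sketch below uses invented replicate ln(growth-rate) values purely for illustration.

```python
# Sketch: separating experimental (within-condition) from environmental
# (between-condition) variability of ln(growth-rate) replicates, in the spirit of a
# one-way ANOVA decomposition. Replicate values are invented for illustration.
import numpy as np

# replicate ln(growth-rate) estimates, grouped by environmental condition
groups = [
    np.array([-2.10, -2.05, -2.18]),
    np.array([-1.60, -1.72, -1.65, -1.58]),
    np.array([-2.80, -2.71, -2.86]),
]

within_var = np.mean([g.var(ddof=1) for g in groups])          # experimental variability
group_means = np.array([g.mean() for g in groups])
between_var = group_means.var(ddof=1)                          # environmental variability

print(f"within-group (experimental) variance  : {within_var:.4f}")
print(f"between-group (environmental) variance: {between_var:.4f}")
```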

  3. Understanding error generation in fused deposition modeling

    International Nuclear Information System (INIS)

    Bochmann, Lennart; Transchel, Robert; Wegener, Konrad; Bayley, Cindy; Helu, Moneer; Dornfeld, David

    2015-01-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08–0.30 mm) are generally greater than in the x direction (0.12–0.62 mm) and the z direction (0.21–0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology. (paper)

  4. Error of quantum-logic simulation via vector-soliton collisions

    International Nuclear Information System (INIS)

    Janutka, Andrzej

    2007-01-01

    In a concept of simulating the quantum logic with vector solitons by the author (Janutka 2006 J. Phys. A: Math. Gen. 39 12505), the soliton polarization is thought of as a state vector of a system of cebits (classical counterparts of qubits) switched via collisions with other solitons. The advantage of this method of information processing compared to schemes using linear optics is the possibility of the determination of the information-register state in a single measurement. Minimization of the information-processing error for different optical realizations of the logical systems is studied in the framework of a quantum analysis of soliton fluctuations. The problem is considered with relevance to general difficulties of the quantum error-correction schemes for the classical analogies of the quantum-information processing

  5. Effects of structural error on the estimates of parameters of dynamical systems

    Science.gov (United States)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.

  6. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  7. LOWER BOUNDS ON PHOTOMETRIC REDSHIFT ERRORS FROM TYPE Ia SUPERNOVA TEMPLATES

    International Nuclear Information System (INIS)

    Asztalos, S.; Nikolaev, S.; De Vries, W.; Olivier, S.; Cook, K.; Wang, L.

    2010-01-01

    Cosmology with Type Ia supernovae has heretofore required extensive spectroscopic follow-up to establish an accurate redshift. Though this resource-intensive approach is tolerable at the present discovery rate, the next generation of ground-based all-sky survey instruments will render it unsustainable. Photometry-based redshift determination may be a viable alternative, though the technique introduces non-negligible errors that ultimately degrade the ability to discriminate between competing cosmologies. We present a strictly template-based photometric redshift estimator and compute redshift reconstruction errors in the presence of statistical errors. Under highly degraded photometric conditions corresponding to a statistical error σ of 0.5, the residual redshift error is found to be 0.236 when assuming a nightly observing cadence and a single Large Synoptic Survey Telescope (LSST) u-band filter. Utilizing all six LSST bandpass filters reduces the residual redshift error to 9.1 × 10⁻³. Assuming a more optimistic statistical error σ of 0.05, we derive residual redshift errors of 4.2 × 10⁻⁴, 5.2 × 10⁻⁴, 9.2 × 10⁻⁴, and 1.8 × 10⁻³ for observations occurring nightly, every 5th, 20th and 45th night, respectively, in each of the six LSST bandpass filters. Adopting an observing cadence in which photometry is acquired with all six filters every 5th night and a realistic supernova distribution, binned redshift errors are combined with photometric errors with σ of 0.17 and systematic errors with σ ∼ 0.003 to derive joint errors (σ_w, σ_w′) of (0.012, 0.066), respectively, in (w, w′) with 68% confidence using the Fisher matrix formalism. Though highly idealized in the present context, the methodology is nonetheless quite relevant for the next generation of ground-based all-sky surveys.

  8. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation of the atomic masses. The reference data of Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of the elements in the Table).

  10. Comprehensive analysis of a medication dosing error related to CPOE.

    Science.gov (United States)

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  11. Cognitive and system factors contributing to diagnostic errors in radiology.

    Science.gov (United States)

    Lee, Cindy S; Nagy, Paul G; Weaver, Sallie J; Newman-Toker, David E

    2013-09-01

    In this article, we describe some of the cognitive and system-based sources of detection and interpretation errors in diagnostic radiology and discuss potential approaches to help reduce misdiagnoses. Every radiologist worries about missing a diagnosis or giving a false-positive reading. The retrospective error rate among radiologic examinations is approximately 30%, with real-time errors in daily radiology practice averaging 3-5%. Nearly 75% of all medical malpractice claims against radiologists are related to diagnostic errors. As medical reimbursement trends downward, radiologists attempt to compensate by undertaking additional responsibilities to increase productivity. The increased workload, rising quality expectations, cognitive biases, and poor system factors all contribute to diagnostic errors in radiology. Diagnostic errors are underrecognized and underappreciated in radiology practice. This is due to the inability to obtain reliable national estimates of the impact, the difficulty in evaluating effectiveness of potential interventions, and the poor response to systemwide solutions. Most of our clinical work is executed through type 1 processes to minimize cost, anxiety, and delay; however, type 1 processes are also vulnerable to errors. Instead of trying to completely eliminate cognitive shortcuts that serve us well most of the time, becoming aware of common biases and using metacognitive strategies to mitigate the effects have the potential to create sustainable improvement in diagnostic errors.

  12. Real-time detection and elimination of nonorthogonality error in interference fringe processing

    International Nuclear Information System (INIS)

    Hu Haijiang; Zhang Fengdeng

    2011-01-01

    In interference fringe measurement systems, the nonorthogonality error is a main error source that influences the precision and accuracy of the measurement system. The detection and elimination of this error has been an important target. A novel method that only uses cross-zero detection and counting is proposed to detect and eliminate the nonorthogonality error in real time. This method can be simply realized by means of digital logic devices, because it does not invoke trigonometric or inverse trigonometric functions, and it can be widely used in bidirectional subdivision systems for Moiré fringes and other optical instruments.

  13. Medical error, malpractice and complications: a moral geography.

    Science.gov (United States)

    Zientek, David M

    2010-06-01

    This essay reviews and defines avoidable medical error, malpractice and complication. The relevant ethical principles pertaining to unanticipated medical outcomes are identified. In light of these principles I critically review the moral culpability of the agents in each circumstance and the resulting obligations to patients, their families, and the health care system in general. While I touch on some legal implications, a full discussion of legal obligations and liability issues is beyond the scope of this paper.

  14. Refractive errors and school performance in Brazzaville, Congo ...

    African Journals Online (AJOL)

    Background: Wearing glasses before the age of ten is becoming more common in developed countries. In black Africa, for cultural or irrational reasons, this practice remains exceptional. This situation is a source of amblyopia and learning difficulties. Objective: To determine the role of refractive errors in school performance in ...

  15. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  16. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k²
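
    A toy numerical comparison of the two approaches might look as follows: each "MC run" is a noisy evaluation of an assumed linear prediction, the unisim estimate sums squared one-at-a-time 1σ shifts, and the multisim estimate is the spread over runs with all systematic parameters drawn at random. All settings are illustrative assumptions.

```python
# Toy comparison of unisim and multisim estimates of the total systematic error on a
# prediction. The "MC run" is a noisy evaluation of an assumed linear model; all
# numerical settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
sens = np.array([0.5, 1.0, 2.0])        # sensitivity of the prediction to each systematic
sigma_sys = np.array([1.0, 1.0, 1.0])   # assumed std. dev. of each systematic parameter
mc_noise = 0.5                          # statistical noise of a single MC run
true_total = np.sqrt(np.sum((sens * sigma_sys) ** 2))

def mc_run(theta):
    return np.dot(sens, theta) + rng.normal(0, mc_noise)   # prediction shift from nominal

# unisim: vary one parameter at a time by one standard deviation
shifts = [mc_run(sigma_sys * np.eye(3)[i]) - mc_run(np.zeros(3)) for i in range(3)]
unisim = np.sqrt(np.sum(np.square(shifts)))

# multisim: draw all parameters at random in every run and take the spread
n_runs = 500
results = [mc_run(rng.normal(0, sigma_sys)) for _ in range(n_runs)]
multisim = np.sqrt(max(np.var(results, ddof=1) - mc_noise ** 2, 0.0))  # subtract MC noise

print(f"true {true_total:.2f}  unisim {unisim:.2f}  multisim {multisim:.2f}")
```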

  17. Understanding the nature of errors in nursing: using a model to analyse critical incident reports of errors which had resulted in an adverse or potentially adverse event.

    Science.gov (United States)

    Meurier, C E

    2000-07-01

    Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.

  18. MODELS OF AIR TRAFFIC CONTROLLERS ERRORS PREVENTION IN TERMINAL CONTROL AREAS UNDER UNCERTAINTY CONDITIONS

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2017-03-01

Full Text Available Purpose: the aim of this study is to research applied models of air traffic controllers' error prevention in terminal control areas (TMA) under uncertainty conditions. In this work a theoretical framework describing safety events and errors of air traffic controllers connected with operations in the TMA is proposed. Methods: optimisation of the terminal control area formal description based on the Threat and Error Management model and the TMA network model of air traffic flows. Results: the human factors variables associated with safety events in the work of air traffic controllers under uncertainty conditions were obtained. Application principles of the Threat and Error Management model to air traffic controller operations and the TMA network model of air traffic flows were proposed. Discussion: the information-processing context for preventing air traffic controller errors and examples of threats in the work of air traffic controllers relevant for TMA operations under uncertainty conditions are discussed.

  19. Towards a realistic 3D simulation of the extraction region in ITER NBI relevant ion source

    International Nuclear Information System (INIS)

    Mochalskyy, S.; Wünderlich, D.; Fantz, U.; Franzen, P.; Minea, T.

    2015-01-01

The development of negative ion (NI) sources for ITER is strongly accompanied by modelling activities. The ONIX code addresses the physics of formation and extraction of negative hydrogen ions at caesiated sources as well as the amount of co-extracted electrons. In order to be closer to the experimental conditions the code has been improved. It now includes the bias potential applied to the first grid (plasma grid) of the extraction system, and the presence of Cs⁺ ions in the plasma. The simulation results show that such aspects play an important role for the formation of an ion–ion plasma in the boundary region by reducing the depth of the negative potential well in the vicinity of the plasma grid, which limits the extraction of the NIs produced at the Cs-covered plasma grid surface. The influence of the initial temperature of the surface-produced NI and of its emission rate on the NI density in the bulk plasma, which in turn affects the beam formation region, was analysed. The formation of the plasma meniscus, the boundary between the plasma and the beam, was investigated for extraction potentials of 5 and 10 kV. At the smaller extraction potential the meniscus moves closer to the plasma grid, but as in the 10 kV case the deepest meniscus bend point is still outside the aperture. Finally, a plasma containing the same amount of NI and electrons (n(H⁻) = n(e) = 10¹⁷ m⁻³), representing good source conditioning, was simulated. It is shown that under such conditions the extracted NI current can reach values of ∼32 mA cm⁻² using the ITER-relevant extraction potential of 10 kV and ∼19 mA cm⁻² at 5 kV. These results are in good agreement with experimental measurements performed at the small scale ITER prototype source at the test facility BATMAN. (paper)

  20. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  1. Reducing Diagnostic Errors through Effective Communication: Harnessing the Power of Information Technology

    Science.gov (United States)

    Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann

    2008-01-01

    Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151

  2. Basic Testing of the DUCHAMP Source Finder

    Science.gov (United States)

    Westmeier, T.; Popping, A.; Serra, P.

    2012-01-01

    This paper presents and discusses the results of basic source finding tests in three dimensions (using spectroscopic data cubes) with DUCHAMP, the standard source finder for the Australian Square Kilometre Array Pathfinder. For this purpose, we generated different sets of unresolved and extended Hi model sources. These models were then fed into DUCHAMP, using a range of different parameters and methods provided by the software. The main aim of the tests was to study the performance of DUCHAMP on sources with different parameters and morphologies and assess the accuracy of DUCHAMP's source parametrisation. Overall, we find DUCHAMP to be a powerful source finder capable of reliably detecting sources down to low signal-to-noise ratios and accurately measuring their position and velocity. In the presence of noise in the data, DUCHAMP's measurements of basic source parameters, such as spectral line width and integrated flux, are affected by systematic errors. These errors are a consequence of the effect of noise on the specific algorithms used by DUCHAMP for measuring source parameters in combination with the fact that the software only takes into account pixels above a given flux threshold and hence misses part of the flux. In scientific applications of DUCHAMP these systematic errors would have to be corrected for. Alternatively, DUCHAMP could be used as a source finder only, and source parametrisation could be done in a second step using more sophisticated parametrisation algorithms.

  3. THE RELEVANCE OF GOODWILL REPORTING IN AN ISLAMIC CONTEXT

    Directory of Open Access Journals (Sweden)

    Radu-Daniel LOGHIN

    2014-11-01

Full Text Available In recent years global finance has seen the emergence of Islamic finance as an alternative to the western secular system. While the two systems possess largely similar concepts of social equity and well-being, the major divide between them rests in the distinction between divine and natural law as a source of protection for the downtrodden. As communication barriers between the Arabic and Anglo-European accounting systems start to blur, the question posed to practitioners as to what constitutes a source of equity becomes more and more relevant. In the case of Islamic countries, besides internally generated and acquired goodwill, Islamic instruments such as zakat also provide a source of social equity. For the purpose of this paper, two value relevance models are tested on a sample of 56 companies in 6 accounting jurisdictions in order to identify the underlying sources of social equity; the results reveal that zakat disclosures marginally improve the accuracy of the model.

  4. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  5. [Monitoring medication errors in personalised dispensing using the Sentinel Surveillance System method].

    Science.gov (United States)

    Pérez-Cebrián, M; Font-Noguera, I; Doménech-Moral, L; Bosó-Ribelles, V; Romero-Boyero, P; Poveda-Andrés, J L

    2011-01-01

    To assess the efficacy of a new quality control strategy based on daily randomised sampling and monitoring a Sentinel Surveillance System (SSS) medication cart, in order to identify medication errors and their origin at different levels of the process. Prospective quality control study with one year follow-up. A SSS medication cart was randomly selected once a week and double-checked before dispensing medication. Medication errors were recorded before it was taken to the relevant hospital ward. Information concerning complaints after receiving medication and 24-hour monitoring were also noted. Type and origin error data were assessed by a Unit Dose Quality Control Group, which proposed relevant improvement measures. Thirty-four SSS carts were assessed, including 5130 medication lines and 9952 dispensed doses, corresponding to 753 patients. Ninety erroneous lines (1.8%) and 142 mistaken doses (1.4%) were identified at the Pharmacy Department. The most frequent error was dose duplication (38%) and its main cause inappropriate management and forgetfulness (69%). Fifty medication complaints (6.6% of patients) were mainly due to new treatment at admission (52%), and 41 (0.8% of all medication lines), did not completely match the prescription (0.6% lines) as recorded by the Pharmacy Department. Thirty-seven (4.9% of patients) medication complaints due to changes at admission and 32 matching errors (0.6% medication lines) were recorded. The main cause also was inappropriate management and forgetfulness (24%). The simultaneous recording of incidences due to complaints and new medication coincided in 33.3%. In addition, 433 (4.3%) of dispensed doses were returned to the Pharmacy Department. After the Unit Dose Quality Control Group conducted their feedback analysis, 64 improvement measures for Pharmacy Department nurses, 37 for pharmacists, and 24 for the hospital ward were introduced. The SSS programme has proven to be useful as a quality control strategy to identify Unit

  6. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

Full Text Available The minimum energy (ME) coding scheme combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) in relation to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing the RF link with On-Off Keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR as functions of the number of users (receivers).
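
    As a rough, hedged illustration of the kind of relation discussed in this record (not the paper's multi-user DS-CDMA model), the bit error rate of coherent On-Off Keying over an AWGN channel can be computed as a function of Eb/N0; the SNR values below are arbitrary assumptions.

```python
import math

# Illustrative sketch only: textbook single-user BER of coherent On-Off Keying
# (OOK) over an AWGN channel, Pb = Q(sqrt(Eb/N0)). This is not the multi-user
# DS-CDMA system model analysed in the paper.

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ook_ber(ebn0_db):
    """Coherent OOK bit error rate for a given Eb/N0 in dB."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q_function(math.sqrt(ebn0))

for snr_db in [0, 4, 8, 12]:
    print(f"Eb/N0 = {snr_db:2d} dB  ->  BER = {ook_ber(snr_db):.3e}")
```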

  7. Measurement error in income and schooling, and the bias of linear estimators

    DEFF Research Database (Denmark)

    Bingley, Paul; Martinello, Alessandro

The characteristics of measurement error determine the bias of linear estimators. We propose a method for validating economic survey data allowing for measurement error in the validation source, and we apply this method by validating Survey of Health, Ageing and Retirement in Europe (SHARE) data with Danish administrative registers. We find that measurement error in surveys is classical for annual gross income but non-classical for years of schooling, causing a 21% amplification bias in IV estimators of returns to schooling. Using a 1958 Danish schooling reform, we contextualize our result…

  8. Performance-based gear metrology kinematic, transmission, error computation and diagnosis

    CERN Document Server

    Mark, William D

    2012-01-01

A mathematically rigorous explanation of how manufacturing deviations and damage on the working surfaces of gear teeth cause transmission-error contributions to vibration excitations. Some gear-tooth working-surface manufacturing deviations of significant amplitude cause negligible vibration excitation and noise, yet others of minuscule amplitude are a source of significant vibration excitation and noise. Presently available computer-numerically-controlled dedicated gear metrology equipment can measure such error patterns on a gear in a few hours in sufficient detail to enable

  9. Over-Distribution in Source Memory

    Science.gov (United States)

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  10. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the range of the argument (e.g. whether x .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function by using the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x
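
    The caution in this record about computing the complementary error function via the identity erfc(x)=1.0-erf(x) is easy to demonstrate with any standard library; the minimal Python sketch below uses math.erf and math.erfc with arbitrary sample arguments.

```python
import math

# The identity erfc(x) = 1 - erf(x) loses precision for large x because
# erf(x) is then extremely close to 1, so the subtraction cancels almost
# all significant digits; a direct erfc evaluation does not.
for x in [1.0, 3.0, 6.0, 9.0]:
    direct = math.erfc(x)          # computed directly
    via_erf = 1.0 - math.erf(x)    # subject to catastrophic cancellation
    print(f"x = {x}:  erfc = {direct:.6e}   1 - erf = {via_erf:.6e}")
```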

  11. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
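
    A minimal numerical sketch of the effect described here, under assumed parameter values (not the Note's): when the extinction coefficient varies linearly across a finite spectral band, the band-averaged transmittance yields an apparent absorbance that falls below the monochromatic Beer-Lambert value as concentration rises.

```python
import numpy as np

# Apparent absorbance measured with polychromatic light of finite spectral width.
# The detector integrates transmitted intensity over the band, so
#   A_apparent = -log10( mean over the band of 10^(-eps(lambda)*c*l) ),
# which drops below the Beer-Lambert value eps0*c*l when eps varies across the band.
# eps0, slope, width and the concentrations are arbitrary illustrative values.

def apparent_absorbance(c, eps0=1000.0, slope=50.0, width=10.0, path=1.0, n=201):
    lam = np.linspace(-width / 2, width / 2, n)    # offset from band centre (nm)
    eps = eps0 + slope * lam                       # linearly varying extinction coefficient
    transmitted = np.mean(10.0 ** (-eps * c * path))
    return -np.log10(transmitted)

for c in [1e-5, 1e-4, 5e-4, 1e-3]:
    ideal = 1000.0 * c * 1.0                       # monochromatic Beer-Lambert absorbance
    print(f"c = {c:.0e}:  ideal A = {ideal:.3f}  apparent A = {apparent_absorbance(c):.3f}")
```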

  12. Error sources in the real-time NLDAS incident surface solar radiation and an evaluation against field observations and the NARR

    Science.gov (United States)

    Park, G.; Gao, X.; Sorooshian, S.

    2005-12-01

    The atmospheric model is sensitive to the land surface interactions and its coupling with Land surface Models (LSMs) leads to a better ability to forecast weather under extreme climate conditions, such as droughts and floods (Atlas et al. 1993; Beljaars et al. 1996). However, it is still questionable how accurately the surface exchanges can be simulated using LSMs, since terrestrial properties and processes have high variability and heterogeneity. Examinations with long-term and multi-site surface observations including both remotely sensed and ground observations are highly needed to make an objective evaluation on the effectiveness and uncertainty of LSMs at different circumstances. Among several atmospheric forcing required for the offline simulation of LSMs, incident surface solar radiation is one of the most significant components, since it plays a major role in total incoming energy into the land surface. The North American Land Data Assimilation System (NLDAS) and North American Regional Reanalysis (NARR) are two important data sources providing high-resolution surface solar radiation data for the use of research communities. In this study, these data are evaluated against field observations (AmeriFlux) to identify their advantages, deficiencies and sources of errors. The NLDAS incident solar radiation shows a pretty good agreement in monthly mean prior to the summer of 2001, while it overestimates after the summer of 2001 and its bias is pretty close to the EDAS. Two main error sources are identified: 1) GOES solar radiation was not used in the NLDAS for several months in 2001 and 2003, and 2) GOES incident solar radiation when available, was positively biased in year 2002. The known snow detection problem is sometimes identified in the NLDAS, since it is inherited from GOES incident solar radiation. The NARR consistently overestimates incident surface solar radiation, which might produce erroneous outputs if used in the LSMs. Further attention is given to

  13. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
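
    A hedged sketch in the spirit of the approach described (not the author's exact derivation): a weighted least-squares solution whose formal covariance is rescaled by the average weighted measurement residual, so that unmodelled error sources inflate the reported state uncertainty. The design matrix, noise levels and state vector below are invented for illustration.

```python
import numpy as np

# Sketch: batch weighted least squares with an "empirical" covariance obtained by
# scaling the formal covariance with the observed average weighted residual.
# When the actual noise exceeds the assumed noise, the empirical sigma grows.

rng = np.random.default_rng(1)

m, n = 200, 3
H = rng.normal(size=(m, n))                # observation (design) matrix
x_true = np.array([1.0, -2.0, 0.5])
sigma_assumed = 0.1                        # measurement noise assumed by the filter
sigma_actual = 0.3                         # actual noise, including unmodelled errors
y = H @ x_true + rng.normal(0.0, sigma_actual, size=m)

W = np.eye(m) / sigma_assumed**2
P_formal = np.linalg.inv(H.T @ W @ H)      # traditional state error covariance
x_hat = P_formal @ H.T @ W @ y

r = y - H @ x_hat                          # measurement residuals
avg_weighted_resid = (r @ W @ r) / (m - n)
P_empirical = avg_weighted_resid * P_formal

print("formal sigma:   ", np.sqrt(np.diag(P_formal)))
print("empirical sigma:", np.sqrt(np.diag(P_empirical)))
```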

  14. The effect of subject measurement error on joint kinematics in the conventional gait model: Insights from the open-source pyCGM tool using high performance computing methods.

    Science.gov (United States)

    Schwartz, Mathew; Dixon, Philippe C

    2018-01-01

The conventional gait model (CGM) is a widely used biomechanical model which has been validated over many years. The CGM relies on retro-reflective markers placed along anatomical landmarks, a static calibration pose, and subject measurements as inputs for joint angle calculations. While past literature has shown the possible errors caused by improper marker placement, studies on the effects of inaccurate subject measurements are lacking. Moreover, as many laboratories rely on the commercial version of the CGM, released as the Plug-in Gait (Vicon Motion Systems Ltd, Oxford, UK), integrating improvements into the CGM code is not easily accomplished. This paper introduces a Python implementation for the CGM, referred to as pyCGM, which is an open-source, easily modifiable, cross platform, and high performance computational implementation. The aims of pyCGM are to (1) reproduce joint kinematic outputs from the Vicon CGM and (2) be implemented in a parallel approach to allow integration on a high performance computer. The aims of this paper are to (1) demonstrate that pyCGM can systematically and efficiently examine the effect of subject measurements on joint angles and (2) be updated to include new calculation methods suggested in the literature. The results show that the calculated joint angles from pyCGM agree with Vicon CGM outputs, with a maximum lower body joint angle difference of less than 10⁻⁵ degrees. Through the hierarchical system, the ankle joint is the most vulnerable to subject measurement error. Leg length has the greatest effect on all joints as a percentage of measurement error. When compared to the errors previously found through inter-laboratory measurements, the impact of subject measurements is minimal, and researchers should rather focus on marker placement. Finally, we showed that code modifications can be performed to include improved hip, knee, and ankle joint centre estimations suggested in the existing literature. The pyCGM code is provided

  15. Source credibility and idea improvement have independent effects on unconscious plagiarism errors in recall and generate-new tasks.

    Science.gov (United States)

    Perfect, Timothy J; Field, Ian; Jones, Robert

    2009-01-01

    Unconscious plagiarism occurs when people try to generate new ideas or when they try to recall their own ideas from among a set generated by a group. In this study, the factors that independently influence these two forms of plagiarism error were examined. Participants initially generated solutions to real-world problems in 2 domains of knowledge in collaboration with a confederate presented as an expert in 1 domain. Subsequently, the participant generated improvements to half of the ideas from each person. Participants returned 1 day later to recall either their own ideas or their partner's ideas and to complete a generate-new task. A double dissociation was observed. Generate-new plagiarism was driven by partner expertise but not by idea improvement, whereas recall plagiarism was driven by improvement but not expertise. This improvement effect on recall plagiarism was seen for the recall-own but not the recall-partner task, suggesting that the increase in recall-own plagiarism is due to mistaken idea ownership, not source confusion.

  16. Refractive optics to compensate x-ray mirror shape-errors

    Science.gov (United States)

    Laundy, David; Sawhney, Kawal; Dhamgaye, Vishal; Pape, Ian

    2017-08-01

Elliptically profiled mirrors operating at glancing angle are frequently used at X-ray synchrotron sources to focus X-rays into sub-micrometer sized spots. Mirror figure error, defined as the height difference function between the actual mirror surface and the ideal elliptical profile, causes a perturbation of the X-ray wavefront for X-rays reflecting from the mirror. This perturbation, when propagated to the focal plane, results in an increase in the size of the focused beam. At Diamond Light Source we are developing refractive optics that can be used to locally cancel out the wavefront distortion caused by figure error from nano-focusing elliptical mirrors. These optics could be used to correct existing optical components on synchrotron radiation beamlines in order to give focused X-ray beam sizes approaching the theoretical diffraction limit. We present our latest results showing measurement of the X-ray wavefront error after reflection from X-ray mirrors and the translation of the measured wavefront into a design for refractive optical elements for correction of the X-ray wavefront. We show measurement of the focused beam with and without the corrective optics inserted, showing reduction in the size of the focus resulting from the correction to the wavefront.

  17. Detecting Soft Errors in Stencil based Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, V. [Univ. of Utah, Salt Lake City, UT (United States); Gopalkrishnan, G. [Univ. of Utah, Salt Lake City, UT (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
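
    A minimal sketch of the detector idea described above, under simplified assumptions (a 1D stencil, synthetic data, a hand-picked threshold); the SORREL library itself may differ in detail. A linear regression predicts each interior point from its neighbours, and points with anomalously large prediction residuals are flagged as suspected soft errors.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Training" data: results of a smooth stencil-like computation with mild noise.
x = np.linspace(0.0, 1.0, 1000)
u = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 1e-3, x.size)

left, right, centre = u[:-2], u[2:], u[1:-1]
A = np.column_stack([left, right, np.ones_like(left)])
coef, *_ = np.linalg.lstsq(A, centre, rcond=None)        # cheap linear detector model

resid = centre - A @ coef
threshold = 6.0 * np.std(resid)                          # flag residuals beyond 6 sigma

# Inject a bit-flip-like corruption and scan for it.
corrupted = u.copy()
corrupted[400] += 0.05
pred = np.column_stack([corrupted[:-2], corrupted[2:],
                        np.ones(corrupted.size - 2)]) @ coef
suspects = np.where(np.abs(corrupted[1:-1] - pred) > threshold)[0] + 1
print("suspected corrupted indices:", suspects)          # corrupted point and/or its neighbours
```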

  18. An efficient CDMA decoder for correlated information sources

    International Nuclear Information System (INIS)

    Efraim, Hadar; Yacov, Nadav; Kanter, Ido; Shental, Ori

    2009-01-01

    We consider the detection of correlated information sources in the ubiquitous code-division multiple-access (CDMA) scheme. We propose a message-passing based scheme for detecting correlated sources directly, with no need for source coding. The detection is done simultaneously over a block of transmitted binary symbols (word). Simulation results are provided, demonstrating a substantial improvement in bit error rate in comparison with the unmodified detector and the alternative of source compression. The robustness of the error-performance improvement is shown under practical model settings, including wrong estimation of the generating Markov transition matrix and finite-length spreading codes

  19. Error Analysis of Variations on Larsen's Benchmark Problem

    International Nuclear Information System (INIS)

    Azmy, YY

    2001-01-01

Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem are concerned with the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method, and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N), and of the Characteristic (AHOT-C) type. The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that while the integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the pointwise L∞ norm does not, due to the solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods considered in spite of the fact that AHOT-C is locally exact, suggesting that numerical diffusion across the singular characteristic is the major source of error on the global scale. However, AHOT-C possesses a given accuracy in a larger fraction of computational cells than DD
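
    The behaviour reported here can be illustrated with a small, purely synthetic example: a smooth discretization error that shrinks with mesh refinement, plus a localized non-vanishing jump (standing in for the discontinuity across the singular characteristic), drives the L1 and L2 norms to zero while the L∞ norm stalls. The "exact" and "numerical" solutions below are invented stand-ins, not the benchmark problem.

```python
import numpy as np

# Cell-wise error norms on successively refined 1D meshes. The error field is
# a smooth O(1/n) component plus a fixed-height jump confined to a few cells.

def error_norms(n):
    x = (np.arange(n) + 0.5) / n                          # cell centres
    exact = np.exp(-x)
    numerical = exact + 0.5 / n * np.sin(8 * x) \
                + 0.05 * (np.abs(x - 0.5) < 1.0 / n)      # localized, non-vanishing error
    err = np.abs(numerical - exact)
    l1 = np.sum(err) / n                                  # cell-averaged L1 norm
    l2 = np.sqrt(np.sum(err**2) / n)                      # L2 norm
    linf = np.max(err)                                    # pointwise L-infinity norm
    return l1, l2, linf

for n in [16, 64, 256, 1024]:
    l1, l2, linf = error_norms(n)
    print(f"n = {n:5d}:  L1 = {l1:.2e}  L2 = {l2:.2e}  Linf = {linf:.2e}")
```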

  20. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
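
    A toy numerical sketch of the two strategies for a single data bin, under an assumed linear model (illustrative sensitivities and MC statistical error, not the note's exact formulas): the unisim estimate sums squared one-at-a-time shifts, while the multisim estimate is the variance over runs in which all parameters are drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model, assumed for illustration: the prediction in one bin shifts
# by s[i] per 1-sigma change of systematic i, and each MC run carries its own
# statistical noise of size mc_stat.
s = np.array([1.0, 0.5, 0.2])      # 1-sigma effects of three systematics
mc_stat = 0.3                      # statistical error of a single MC run

def mc_run(alphas):
    """One MC prediction for the bin, with parameters given in units of sigma."""
    return np.dot(s, alphas) + rng.normal(0.0, mc_stat)

nominal = mc_run(np.zeros_like(s))

# Unisim: one dedicated run per systematic, shifted by +1 sigma.
unisim_shifts = np.array([mc_run(np.eye(len(s))[i]) - nominal for i in range(len(s))])
var_unisim = np.sum(unisim_shifts**2)

# Multisim: every run draws all parameters from N(0, 1) simultaneously.
n_multi = 200
multi = np.array([mc_run(rng.normal(size=len(s))) for _ in range(n_multi)])
var_multisim = np.var(multi, ddof=1)

print("true systematic variance:", np.sum(s**2))
print("unisim estimate:         ", var_unisim)
print("multisim estimate:       ", var_multisim)
```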

  1. Sources of Phoneme Errors in Repetition: Perseverative, Neologistic, and Lesion Patterns in Jargon Aphasia

    Directory of Open Access Journals (Sweden)

    Emma Pilkington

    2017-05-01

Full Text Available This study examined patterns of neologistic and perseverative errors during word repetition in fluent Jargon aphasia. The principal hypotheses accounting for Jargon production indicate that poor activation of a target stimulus leads to weakly activated target phoneme segments, which are outcompeted at the phonological encoding level. Voxel-lesion symptom mapping studies of word repetition errors suggest a breakdown in the translation from auditory-phonological analysis to motor activation. Behavioral analyses of repetition data were used to analyse the target relatedness (Phonological Overlap Index: POI) of neologistic errors and patterns of perseveration in 25 individuals with Jargon aphasia. Lesion-symptom analyses explored the relationship between neurological damage and jargon repetition in a group of 38 aphasia participants. Behavioral results showed that neologisms produced by 23 jargon individuals contained greater degrees of target lexico-phonological information than predicted by chance and that neologistic and perseverative production were closely associated. A significant relationship between jargon production and lesions to temporoparietal regions was identified. Region of interest regression analyses suggested that damage to the posterior superior temporal gyrus and superior temporal sulcus in combination was best predictive of a Jargon aphasia profile. Taken together, these results suggest that poor phonological encoding, secondary to impairment in sensory-motor integration, alongside impairments in self-monitoring, results in jargon repetition. Insights for clinical management and future directions are discussed.

  2. Review of current GPS methodologies for producing accurate time series and their error sources

    Science.gov (United States)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand on detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional scale geodetic phenomena hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step and mainly with three different strategies in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automatize all phases from analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e
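
    A minimal sketch of the functional-model part of such an analysis, with synthetic data and assumed parameter values: a daily position series is fit by least squares with a linear rate plus annual and semi-annual terms. A realistic analysis would also estimate offsets and, crucially, adopt a stochastic (e.g. power-law) noise model so that the rate uncertainty is trustworthy.

```python
import numpy as np

rng = np.random.default_rng(3)

# Ten years of synthetic daily positions (mm): linear tectonic rate plus an
# annual signal and white noise. All values are illustrative assumptions.
t = np.arange(0, 3650) / 365.25                      # epochs in years
truth = 2.0 + 3.5 * t + 2.0 * np.sin(2 * np.pi * t) + 1.0 * np.cos(2 * np.pi * t)
obs = truth + rng.normal(0.0, 1.5, t.size)

# Functional model: intercept, rate, annual and semi-annual harmonics.
A = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),    # annual
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),    # semi-annual
])
params, *_ = np.linalg.lstsq(A, obs, rcond=None)
print("estimated rate (mm/yr):", params[1])
```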

  3. Nuclear weapon relevant materials and preventive arms control. Uranium-free fuels for plutonium elimination and spallation neutron sources

    International Nuclear Information System (INIS)

    Liebert, Wolfgang; Englert, Matthias; Pistner, Christoph

    2009-01-01

    technological challenges of nuclear non-proliferation, which are directly connected with the central role of weapon-relevant materials, and it is trying to present practical solutions on a technical basis: - Discover paths for the disposal of existing amounts of nuclear weapon-relevant materials elaborating on the example of technically-based plutonium disposal options: central technical questions of the possible use of uranium-free inert matrix fuel (IMF) in currently used light water reactors will be addressed in order to clarify which advantages or disadvantages do exist in comparison to other disposal options. The investigation is limited on the comparison with one other reactor-based option, the use of uranium-plutonium mixed-oxide (MOX) fuels. - Analysis of proliferation relevant potentials of new nuclear technologies (accessibility of weapon materials): Exemplary investigation of spallation neutron sources in order to improve this technology by a more proliferation resistant shaping. Although they are obviously capable to breed nuclear weapon-relevant materials like plutonium, uranium-233 or tritium, there is no comprehensive analysis of nonproliferation aspects of spallation neutron sources up to now. Both project parts provide not only contributions to the concept of preventive arms control but also to the shaping of technologies, which is oriented towards the criteria of proliferation resistance.

  4. [Evaluation of administration errors of injectable drugs in neonatology].

    Science.gov (United States)

    Cherif, A; Sayadi, M; Ben Hmida, H; Ben Ameur, K; Mestiri, K

    2015-11-01

The use of injectable drugs in newborns represents more than 90% of prescriptions and requires special precautions in order to ensure greater safety and efficiency. The aim of this study was to record errors relating to the administration of injectable drugs and to suggest corrective actions. This descriptive, cross-sectional study evaluated 300 injectable drug administrations in a neonatology unit; 261 administrations contained an error. Data were collected by direct observation of the administration act. The errors observed were: an inappropriate mixture (2.6% of cases); an incorrect delivery rate (33.7% of cases); incorrect dilutions (26.7% of cases); errors in calculating the dose to be injected (16.7% of cases); errors while sampling small volumes (6.3% of cases); and errors or omission of the administration schedule (1% of cases). These data enabled us to evaluate the administration of injectable drugs in neonatology. The different types of errors observed could be a source of therapeutic inefficiency, extended lengths of stay or iatrogenic drug events. Following these observations, corrective actions were undertaken by pharmacists, consisting of organizing training sessions for nursing staff and developing an explanatory guide for the dilution and administration of injectable medicines, which was made available to the clinical service. Collaborative doctor-nurse-pharmacist strategies can help to reduce errors in the medication process, especially during administration. They improve the use of injectable drugs, offering more safety and better efficiency, and contribute to guaranteeing appropriate therapy for patients. Copyright © 2015. Published by Elsevier Masson SAS.

  5. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

In this review article, the definition of medication errors, the medication error problem, the types of medication errors, their common causes, the monitoring and consequences of medication errors, and the prevention and management of medication errors are explained clearly, with tables that are easy to understand.

  6. Water flux in animals: analysis of potential errors in the tritiated water method

    International Nuclear Information System (INIS)

    Nagy, K.A.; Costa, D.

    1979-03-01

Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations.

  7. Water flux in animals: analysis of potential errors in the tritiated water method

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, K.A.; Costa, D.

    1979-03-01

Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations.

  8. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are (0,1)-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  9. Hessian matrix approach for determining error field sensitivity to coil deviations

    Science.gov (United States)

    Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi

    2018-05-01

    The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
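
    A hedged sketch of the idea, not the FOCUS implementation (which evaluates derivatives of the real normal-field cost): build the Hessian of a cost function with respect to coil-parameter deviations by finite differences and read the most sensitive perturbation directions from its largest eigenvalues. The quadratic toy cost below is an assumption standing in for the surface integral of normalized normal field errors.

```python
import numpy as np

def cost(p):
    # Toy stand-in for the normal-field error cost as a function of coil deviations.
    M = np.array([[4.0, 1.0, 0.0],
                  [1.0, 2.0, 0.5],
                  [0.0, 0.5, 0.1]])
    return 0.5 * p @ M @ p

def hessian_fd(f, p0, h=1e-5):
    """Central finite-difference Hessian of f at p0."""
    n = p0.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            def shifted(si, sj):
                p = p0.copy()
                p[i] += si * h
                p[j] += sj * h
                return f(p)
            H[i, j] = (shifted(+1, +1) - shifted(+1, -1)
                       - shifted(-1, +1) + shifted(-1, -1)) / (4.0 * h * h)
    return H

H = hessian_fd(cost, np.zeros(3))
vals, vecs = np.linalg.eigh(H)                      # eigenvalues in ascending order
print("sensitivities (eigenvalues):", vals)
print("most sensitive deviation direction:", vecs[:, -1])
```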

  10. Estimating Classification Errors under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    NARCIS (Netherlands)

    Boeschoten, Laura; Oberski, Daniel; De Waal, Ton

    2017-01-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible

  11. Development of an Experimental Measurement System for Human Error Characteristics and a Pilot Test

    International Nuclear Information System (INIS)

    Jang, Tong-Il; Lee, Hyun-Chul; Moon, Kwangsu

    2017-01-01

Selected individual and team characteristics were measured and evaluated in a pilot test using the experimental measurement system for human error characteristics. This is one of the processes used to produce input data for the Eco-DBMS. The pilot test was also used to establish methods for measuring and acquiring physiological data and to develop data formats and quantification methods for the database. In this study, a pilot test measuring stress and tension levels and team cognitive characteristics, among other human error characteristics, was performed using the human error characteristics measurement and experimental evaluation system. In an experiment measuring the stress level, physiological characteristics were measured using EEG in a simulated unexpected situation. Although this was a pilot experiment, the results validated that relevant data can be obtained for evaluating the human error coping effects of workers' FFD management guidelines and of guidelines for unexpected situations. In following research, additional experiments covering other human error characteristics will be conducted. Furthermore, the human error characteristics measurement and experimental evaluation system will be utilized to validate various human error coping solutions, such as human factors criteria, designs, and guidelines, as well as to supplement the human error characteristics database.

  12. Stereochemical errors and their implications for molecular dynamics simulations

    Directory of Open Access Journals (Sweden)

    Freddolino Peter L

    2011-05-01

Full Text Available Abstract Background: Biological molecules are often asymmetric with respect to stereochemistry, and correct stereochemistry is essential to their function. Molecular dynamics simulations of biomolecules have increasingly become an integral part of biophysical research. However, stereochemical errors in biomolecular structures can have a dramatic impact on the results of simulations. Results: Here we illustrate the effects that chirality and peptide bond configuration flips may have on the secondary structure of proteins throughout a simulation. We also analyze the most common sources of stereochemical errors in biomolecular structures and present software tools to identify, correct, and prevent stereochemical errors in molecular dynamics simulations of biomolecules. Conclusions: Use of the tools presented here should become a standard step in the preparation of biomolecular simulations and in the generation of predicted structural models for proteins and nucleic acids.

  13. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
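
    A minimal sketch contrasting the two update rules on a two-cue compound, with an arbitrary learning rate and trial count: under total error reduction (Rescorla-Wagner style) the cues share one prediction error and compete, whereas under local error reduction each cue is compared with the outcome separately.

```python
import numpy as np

# TER vs LER on repeated AB+ trials (both cues present, outcome = 1).
# Learning rate and number of trials are illustrative assumptions.

def train(rule, n_trials=50, lr=0.2):
    w = np.zeros(2)                      # associative strengths of cues A and B
    x = np.array([1.0, 1.0])             # both cues present on every trial
    outcome = 1.0
    for _ in range(n_trials):
        if rule == "TER":                # shared error: outcome minus compound prediction
            err = outcome - np.dot(w, x)
            w += lr * err * x
        else:                            # LER: each cue's own discrepancy with the outcome
            w += lr * (outcome - w) * x
    return w

print("TER weights:", train("TER"))      # weights sum to ~1 (cue competition)
print("LER weights:", train("LER"))      # each weight approaches ~1 (no competition)
```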

  14. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information

    International Nuclear Information System (INIS)

    Burr, T.; Croft, S.; Krieger, T.; Martin, K.; Norman, C.; Walsh, S.

    2016-01-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
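
    A hedged sketch broadly following the first case listed (inverse regression with negligible error in predictors), with invented calibration data: the covariance of the fitted calibration coefficients and the repeatability of a new response are propagated into a bottom-up uncertainty for the predicted value. This illustrates the ingredients, not the paper's exact refinement.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented calibration set: certified reference values and instrument responses.
x_ref = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
y_cal = 2.0 * x_ref + 0.3 + rng.normal(0.0, 0.05, x_ref.size)

# Inverse regression: fit x = b0 + b1 * y directly.
A = np.column_stack([np.ones_like(y_cal), y_cal])
coef, resid_ss, *_ = np.linalg.lstsq(A, x_ref, rcond=None)
dof = x_ref.size - 2
s2 = float(resid_ss[0]) / dof                      # residual variance of the calibration fit
cov_coef = s2 * np.linalg.inv(A.T @ A)             # covariance of (b0, b1)

# Bottom-up uncertainty for one new measured response.
y_new = 4.1                                        # response measured on a new item
sigma_y_new = 0.05                                 # assumed repeatability of that response
g = np.array([1.0, y_new])                         # gradient of x_hat w.r.t. (b0, b1)
x_hat = g @ coef
var_bottom_up = g @ cov_coef @ g + (coef[1] * sigma_y_new) ** 2
print("x_hat =", x_hat, " bottom-up sigma =", np.sqrt(var_bottom_up))
```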

  15. Multipole error analysis using local 3-bump orbit data in Fermilab Recycler

    International Nuclear Information System (INIS)

    Yang, M.J.; Xiao, M.

    2005-01-01

    The magnetic harmonic errors of the Fermilab Recycler ring were examined using circulating beam data taken with closed local orbit bumps. Data was first parsed into harmonic orbits of first, second, and third order. Each of which was analyzed for sources of magnetic errors of corresponding order. This study was made possible only with the incredible resolution of a new BPM system that was commissioned after June of 2003

  16. Saccades to remembered target locations: an analysis of systematic and variable errors.

    Science.gov (United States)

    White, J M; Sparks, D L; Stanford, T R

    1994-01-01

    We studied the effects of varying delay interval on the accuracy and velocity of saccades to the remembered locations of visual targets. Remembered saccades were less accurate than control saccades. Both systematic and variable errors contributed to the loss of accuracy. Systematic errors were similar in size for delay intervals ranging from 400 msec to 5.6 sec, but variable errors increased monotonically as delay intervals were lengthened. Compared to control saccades, remembered saccades were slower and the peak velocities were more variable. However, neither peak velocity nor variability in peak velocity was related to the duration of the delay interval. Our findings indicate that a memory-related process is not the major source of the systematic errors observed on memory trials.

  17. Proceedings of the workshop on ion source issues relevant to a pulsed spallation neutron source: Part 1: Workshop summary

    International Nuclear Information System (INIS)

    Schroeder, L.; Leung, K.N.; Alonso, J.

    1994-10-01

    The workshop reviewed the ion-source requirements for high-power accelerator-driven spallation neutron facilities, and the performance of existing ion sources. Proposals for new facilities in the 1- to 5-MW range call for a widely differing set of ion-source requirements. For example, the source peak current requirements vary from 40 mA to 150 mA, while the duty factor ranges from 1% to 9%. Much of the workshop discussion centered on the state of the art of negative hydrogen ion (H⁻) source technology and the present experience with Penning and volume sources. In addition, other ion-source technologies, for positive ions or CW applications, were reviewed. Some of these sources have been operational at existing accelerator complexes and some are in the source-development stage on test stands.

  18. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and pediatric patients. Of the error reports identified, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  19. Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.

    Science.gov (United States)

    Taghva, Kazem; And Others

    1996-01-01

    Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)

  20. Causes of medication administration errors in hospitals: a systematic review of quantitative and qualitative evidence.

    Science.gov (United States)

    Keers, Richard N; Williams, Steven D; Cooke, Jonathan; Ashcroft, Darren M

    2013-11-01

    Underlying systems factors have been seen to be crucial contributors to the occurrence of medication errors. By understanding the causes of these errors, the most appropriate interventions can be designed and implemented to minimise their occurrence. This study aimed to systematically review and appraise empirical evidence relating to the causes of medication administration errors (MAEs) in hospital settings. Nine electronic databases (MEDLINE, EMBASE, International Pharmaceutical Abstracts, ASSIA, PsycINFO, British Nursing Index, CINAHL, Health Management Information Consortium and Social Science Citations Index) were searched between 1985 and May 2013. Inclusion and exclusion criteria were applied to identify eligible publications through title analysis followed by abstract and then full text examination. English language publications reporting empirical data on causes of MAEs were included. Reference lists of included articles and relevant review papers were hand searched for additional studies. Studies were excluded if they did not report data on specific MAEs, used accounts from individuals not directly involved in the MAE concerned or were presented as conference abstracts with insufficient detail. A total of 54 unique studies were included. Causes of MAEs were categorised according to Reason's model of accident causation. Studies were assessed to determine relevance to the research question and how likely the results were to reflect the potential underlying causes of MAEs based on the method(s) used. Slips and lapses were the most commonly reported unsafe acts, followed by knowledge-based mistakes and deliberate violations. Error-provoking conditions influencing administration errors included inadequate written communication (prescriptions, documentation, transcription), problems with medicines supply and storage (pharmacy dispensing errors and ward stock management), high perceived workload, problems with ward-based equipment (access, functionality

  1. Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields

    Science.gov (United States)

    Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne

    2015-01-01

    Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789

  2. Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields.

    Science.gov (United States)

    Sapozhnikov, Oleg A; Tsysar, Sergey A; Khokhlova, Vera A; Kreider, Wayne

    2015-09-01

    Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors.

  3. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    Directory of Open Access Journals (Sweden)

    Emma Wells

    Full Text Available To prevent transmission in Ebola Virus Disease (EVD outbreaks, it is recommended to disinfect living things (hands and people with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH, sodium dichloroisocyanurate (NaDCC, and sodium hypochlorite (NaOCl have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1 determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2 conducting volunteer testing to assess ease-of-use; and, 3 determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method, then DPD dilution methods (2.4-19% error, then test strips (5.2-48% error; precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources, and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed. Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration

  4. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts - an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model comprise the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. In the formulation of the objective function, we require the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration
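
    The role of the ANN component, mapping candidate source characteristics to the unknown lag time, can be sketched as below. scikit-learn's MLPRegressor with an L-BFGS solver stands in for the Levenberg-Marquardt training used by the authors, and the synthetic training pairs are purely illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        # hypothetical training set: (x, y, release period) -> lag time at the observation well
        X = rng.uniform([0.0, 0.0, 1.0], [100.0, 50.0, 10.0], size=(500, 3))
        lag = 0.16 * np.hypot(X[:, 0], X[:, 1]) + 0.3 * X[:, 2]    # stand-in "true" relation

        ann = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                           max_iter=5000, random_state=0)
        ann.fit(X, lag)
        print("predicted lag time:", ann.predict([[60.0, 20.0, 4.0]])[0])

    The predicted lag time then fixes the simulation window inside the optimization model, whose decision variables are the source coordinates and the release period.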

  5. A continuous quality improvement project to reduce medication error in the emergency department.

    Science.gov (United States)

    Lee, Sara Bc; Lee, Larry Ly; Yeung, Richard Sd; Chan, Jimmy Ts

    2013-01-01

    Medication errors are a common source of adverse healthcare incidents, particularly in the emergency department (ED), which has a number of characteristics that make it prone to medication errors. This project aims to reduce medication errors and improve the health and economic outcomes of clinical care in a Hong Kong ED. In 2009, a task group was formed to identify problems that potentially endanger medication safety and to develop strategies to eliminate these problems. Responsible officers were assigned to look after seven error-prone areas. Strategies were proposed, discussed, endorsed and promulgated to eliminate the problems identified. The number of medication incidents (MI) fell from 16 before the improvement work to 6 after it. This project successfully established a concrete organizational structure to safeguard error-prone areas of medication safety in a sustainable manner.

  6. SimCommSys: taking the errors out of error-correcting code simulations

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, we present SimCommSys, a simulator of communication systems that we are releasing under an open source license. The core of the project is a set of C++ libraries defining communication system components and a distributed Monte Carlo simulator. Of principal interest is the error-control coding component, where various kinds of binary and non-binary codes are implemented, including turbo, LDPC, repeat-accumulate and Reed–Solomon. The project also contains a number of ready-to-build binaries implementing various stages of the communication system (such as the encoder and decoder), a complete simulator and a system benchmark. Finally, SimCommSys also provides a number of shell and Python scripts to encapsulate routine use cases. As long as the required components are already available in SimCommSys, the user may simulate complete communication systems of their own design without any additional programming. The strict separation of development (needed only to implement new components) and use (to simulate specific constructions) encourages reproducibility of experimental work and reduces the likelihood of error. Following an overview of the framework, we provide some examples of how to use the framework, including the implementation of a simple codec, the specification of communication systems and their simulation.
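
    The kind of Monte Carlo experiment such a simulator automates can be shown in miniature: the sketch below estimates the bit error rate of a trivial rate-1/3 repetition code over an AWGN channel with BPSK signalling. It is a generic Python example, not SimCommSys code, and the block length and SNR are arbitrary.

        import numpy as np

        rng = np.random.default_rng(42)
        n_bits, ebn0_db, rate = 200_000, 4.0, 1.0 / 3.0
        bits = rng.integers(0, 2, n_bits)

        coded = np.repeat(bits, 3)                        # rate-1/3 repetition encoder
        symbols = 1.0 - 2.0 * coded                       # BPSK mapping: 0 -> +1, 1 -> -1
        ebn0 = 10 ** (ebn0_db / 10)
        noise_std = np.sqrt(1.0 / (2.0 * ebn0 * rate))    # Es/N0 = (Eb/N0) * code rate
        received = symbols + rng.normal(0.0, noise_std, symbols.size)

        votes = (received < 0).astype(int).reshape(-1, 3).sum(axis=1)
        decoded = (votes >= 2).astype(int)                # majority-vote decoder
        print("estimated BER:", np.mean(decoded != bits))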

  7. MEASUREMENT ERROR EFFECT ON THE POWER OF CONTROL CHART FOR ZERO-TRUNCATED POISSON DISTRIBUTION

    Directory of Open Access Journals (Sweden)

    Ashit Chakraborty

    2013-09-01

    Full Text Available Measurement error is the difference between the true value and the measured value of a quantity; it exists in practice and may considerably affect the performance of control charts in some cases. Measurement error variability has uncertainty which can come from several sources. In this paper, we have studied the effect of these sources of variability on the power characteristics of the control chart and obtained the values of the average run length (ARL) for the zero-truncated Poisson distribution (ZTPD). An expression for the power of the control chart for variable sample size under a standardized normal variate for the ZTPD is also derived.
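
    The effect can be explored numerically: the sketch below estimates, by simulation, the in-control average run length of a simple Shewhart-type chart for zero-truncated Poisson counts when Gaussian measurement error is added to each observation. The parameter values are arbitrary assumptions rather than the paper's, and the 3-sigma limits are only a stand-in for the chart design studied there.

        import numpy as np

        rng = np.random.default_rng(7)

        def ztp_sample(lam, size):
            """Zero-truncated Poisson via rejection of zeros."""
            out = rng.poisson(lam, size)
            while (out == 0).any():
                zero = out == 0
                out[zero] = rng.poisson(lam, zero.sum())
            return out

        lam, sigma_m, n = 4.0, 1.0, 1_000_000
        observed = ztp_sample(lam, n) + rng.normal(0.0, sigma_m, n)   # counts plus measurement error

        mu, sd = observed.mean(), observed.std()
        alpha = np.mean((observed > mu + 3 * sd) | (observed < mu - 3 * sd))
        print("in-control ARL with measurement error:", 1.0 / alpha)

    Re-running with sigma_m = 0 isolates the contribution of measurement-error variability to the chart's run-length behaviour.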

  8. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    Science.gov (United States)

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  9. A platform-independent method for detecting errors in metagenomic sequencing data: DRISEE.

    Directory of Open Access Journals (Sweden)

    Kevin P Keegan

    Full Text Available We provide a novel method, DRISEE (duplicate read inferred sequencing error estimation, to assess sequencing quality (alternatively referred to as "noise" or "error" within and/or between sequencing samples. DRISEE provides positional error estimates that can be used to inform read trimming within a sample. It also provides global (whole sample error estimates that can be used to identify samples with high or varying levels of sequencing error that may confound downstream analyses, particularly in the case of studies that utilize data from multiple sequencing samples. For shotgun metagenomic data, we believe that DRISEE provides estimates of sequencing error that are more accurate and less constrained by technical limitations than existing methods that rely on reference genomes or the use of scores (e.g. Phred. Here, DRISEE is applied to (non amplicon data sets from both the 454 and Illumina platforms. The DRISEE error estimate is obtained by analyzing sets of artifactual duplicate reads (ADRs, a known by-product of both sequencing platforms. We present DRISEE as an open-source, platform-independent method to assess sequencing error in shotgun metagenomic data, and utilize it to discover previously uncharacterized error in de novo sequence data from the 454 and Illumina sequencing platforms.

  10. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).
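
    A minimal version of such a transverse error study is sketched below: the beam centroid is tracked through a thin-lens FODO channel in which every quadrupole receives a random transverse displacement, and the spread of the resulting orbit at the exit is evaluated over many error seeds. The lattice parameters and error amplitude are invented for illustration and are unrelated to the actual SNS design.

        import numpy as np

        rng = np.random.default_rng(3)
        f, L, n_cells, n_seeds = 2.0, 1.0, 50, 2000   # focal length [m], drift length [m] (assumed)
        sigma_dx = 0.2e-3                             # rms quad displacement, 0.2 mm (assumed)

        def track_once():
            x, xp = 0.0, 0.0                          # centroid starts on axis
            sign = -1.0                               # alternate focusing / defocusing quads
            for _ in range(2 * n_cells):
                x += L * xp                           # drift
                dx = rng.normal(0.0, sigma_dx)        # quad misalignment
                xp += sign * (x - dx) / f             # thin-lens kick about the displaced quad axis
                sign = -sign
            return x

        final_x = np.array([track_once() for _ in range(n_seeds)])
        print("rms centroid displacement at exit [mm]:", 1e3 * final_x.std())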

  11. NxRepair: error correction in de novo sequence assembly using Nextera mate pairs

    Directory of Open Access Journals (Sweden)

    Rebecca R. Murphy

    2015-06-01

    Full Text Available Scaffolding errors and incorrect repeat disambiguation during de novo assembly can result in large scale misassemblies in draft genomes. Nextera mate pair sequencing data provide additional information to resolve assembly ambiguities during scaffolding. Here, we introduce NxRepair, an open source toolkit for error correction in de novo assemblies that uses Nextera mate pair libraries to identify and correct large-scale errors. We show that NxRepair can identify and correct large scaffolding errors, without use of a reference sequence, resulting in quantitative improvements in the assembly quality. NxRepair can be downloaded from GitHub or PyPI, the Python Package Index; a tutorial and user documentation are also available.

  12. Sources of Data and Expertise for Environmental Factors Relevant to Amphibious Operations

    National Research Council Canada - National Science Library

    Andrew, Colin

    2000-01-01

    .... Before embarking on a research program it seemed worthwhile to survey the institutions and personnel who already have expertise in the gathering and analysis of relevant environmental data types...

  13. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i

  14. On the Interpretation of Instrumental Variables in the Presence of Specification Errors

    Directory of Open Access Journals (Sweden)

    P.A.V.B. Swamy

    2015-01-01

    Full Text Available The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist.
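
    For readers who want to connect the argument to the mechanics it questions, a standard two-stage least squares (2SLS) estimator is sketched below on simulated data with one endogenous regressor and one instrument; all data-generating values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5000
        z = rng.normal(size=n)                          # candidate instrument
        u = rng.normal(size=n)                          # structural error
        x = 0.8 * z + 0.5 * u + rng.normal(size=n)      # regressor correlated with the error
        y = 1.0 + 2.0 * x + u                           # true slope on x is 2

        X = np.column_stack([np.ones(n), x])
        Z = np.column_stack([np.ones(n), z])

        beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]                  # first stage
        X_hat = np.column_stack([np.ones(n), x_hat])
        beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]              # second stage

        print("OLS slope:", beta_ols[1], " 2SLS slope:", beta_2sls[1])

    The sketch simply presumes that z is both exogenous and relevant; the paper's point is precisely that such an instrument may not exist in practice.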

  15. ERP Reliability Analysis (ERA) Toolbox: An open-source toolbox for analyzing the reliability of event-related brain potentials.

    Science.gov (United States)

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Generalizability theory (G theory) provides a flexible, multifaceted approach to estimating score reliability. G theory's approach to estimating score reliability has important advantages over classical test theory that are relevant for research using event-related brain potentials (ERPs). For example, G theory does not require parallel forms (i.e., equal means, variances, and covariances), can handle unbalanced designs, and provides a single reliability estimate for designs with multiple sources of error. This monograph provides a detailed description of the conceptual framework of G theory using examples relevant to ERP researchers, presents the algorithms needed to estimate ERP score reliability, and provides a detailed walkthrough of newly-developed software, the ERP Reliability Analysis (ERA) Toolbox, that calculates score reliability using G theory. The ERA Toolbox is open-source, Matlab software that uses G theory to estimate the contribution of the number of trials retained for averaging, group, and/or event types on ERP score reliability. The toolbox facilitates the rigorous evaluation of psychometric properties of ERP scores recommended elsewhere in this special issue. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
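
    A compact numerical counterpart to the concepts listed: given a 2x2 covariance matrix of position errors, the semi-axes and orientation of the probability (error) ellipse at a chosen confidence level follow from its eigendecomposition. The covariance values below are arbitrary.

        import numpy as np
        from scipy.stats import chi2

        cov = np.array([[4.0, 1.5],      # example covariance of (x, y) position errors, m^2
                        [1.5, 2.0]])
        k2 = chi2.ppf(0.95, df=2)        # scale factor for a 95% region of a 2-D Gaussian

        eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
        semi_axes = np.sqrt(k2 * eigvals)
        tilt = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))

        print(f"95% ellipse: major semi-axis {semi_axes[1]:.2f} m, "
              f"minor semi-axis {semi_axes[0]:.2f} m, tilt {tilt:.1f} deg")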

  17. On the meniscus formation and the negative hydrogen ion extraction from ITER neutral beam injection relevant ion source

    Science.gov (United States)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Fantz, U.; Franzen, P.; Minea, T.

    2014-10-01

    The development of a large area (A_source,ITER = 0.9 × 2 m²) hydrogen negative ion (NI) source constitutes a crucial step in construction of the neutral beam injectors of the international fusion reactor ITER. To understand the plasma behaviour in the boundary layer close to the extraction system the 3D PIC MCC code ONIX is exploited. Direct cross checked analysis of the simulation and experimental results from the ITER-relevant BATMAN source testbed with a smaller area (A_source,BATMAN ≈ 0.32 × 0.59 m²) has been conducted for a low perveance beam, but for a full set of plasma parameters available. ONIX has been partially benchmarked by comparison to the results obtained using the commercial particle tracing code for positive ion extraction KOBRA3D. Very good agreement has been found in terms of meniscus position and its shape for simulations of different plasma densities. The influence of the initial plasma composition on the final meniscus structure was then investigated for NIs. As expected from the Child-Langmuir law, the results show that not only does the extraction potential play a crucial role on the meniscus formation, but also the initial plasma density and its electronegativity. For the given parameters, the calculated meniscus locates a few mm downstream of the plasma grid aperture provoking a direct NI extraction. Most of the surface produced NIs do not reach the plasma bulk, but move directly towards the extraction grid guided by the extraction field. Even for artificially increased electronegativity of the bulk plasma the extracted NI current from this region is low. This observation indicates a high relevance of the direct NI extraction. These calculations show that the extracted NI current from the bulk region is low even if a complete ion-ion plasma is assumed, meaning that direct extraction from surface produced ions should be present in order to obtain sufficiently high extracted NI current density. The calculated extracted currents, both ions

  18. Bayesian network models for error detection in radiotherapy plans

    International Nuclear Information System (INIS)

    Kalet, Alan M; Ford, Eric C; Phillips, Mark H; Gennari, John H

    2015-01-01

    The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures. (paper)

  19. What is the functional relevance of prefrontal cortex entrainment to hippocampal theta rhythms?

    Directory of Open Access Journals (Sweden)

    James Michael Hyman

    2011-03-01

    Full Text Available There has been considerable interest in the importance of oscillations in the brain and in how these oscillations relate to the firing of single neurons. Recently a number of studies have shown that the spiking of individual neurons in the medial prefrontal cortex (mPFC) becomes entrained to the hippocampal (HPC) theta rhythm. We recently showed that theta-entrained mPFC cells lost theta-entrainment specifically on error trials even though the firing rates of these cells did not change (Hyman et al., 2010). This implied that the level of HPC theta-entrainment of mPFC units was more predictive of trial outcome than differences in firing rates and that there is more information encoded by the mPFC on working memory tasks than can be accounted for by a simple rate code. Nevertheless, the functional meaning of mPFC entrainment to HPC theta remains a mystery. It is also unclear as to whether there are any differences in the nature of the information encoded by theta-entrained and non-entrained mPFC cells. In this review we discuss mPFC entrainment to HPC theta within the context of previous results as well as provide a more detailed analysis of the Hyman et al. (2010) data set. This re-analysis revealed that theta-entrained mPFC cells selectively encoded a variety of task-relevant behaviors and stimuli, while never-theta-entrained mPFC cells were most strongly attuned to errors or the lack of expected rewards. In fact, these error-responsive neurons were responsible for the error representations exhibited by the entire ensemble of mPFC neurons. A theta reset was also detected in the post-error period. While it is becoming increasingly evident that mPFC neurons exhibit correlates to virtually all cues and behaviors, perhaps phase-locking directs attention to the task-relevant representations required to solve a spatially based working memory task, while the loss of theta-entrainment at the start of error trials may represent a shift of attention away from

  20. 2010 drug packaging review: identifying problems to prevent errors.

    Science.gov (United States)

    2011-06-01

    Prescrire's analyses showed that the quality of drug packaging in 2010 still left much to be desired. Potentially dangerous packaging remains a significant problem: unclear labelling is a source of medication errors; dosing devices for some psychotropic drugs create a risk of overdose; child-proof caps are often lacking; and too many patient information leaflets are misleading or difficult to understand. Everything that is needed for safe drug packaging is available; it is now up to regulatory agencies and drug companies to act responsibly. In the meantime, health professionals can help their patients by learning to identify the pitfalls of drug packaging and providing safe information to help prevent medication errors.

  1. Evaluation of the sources of error in the linepack estimation of a natural gas pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Marco, Fabio Capelassi Gavazzi de [Transportadora Brasileira Gasoduto Bolivia-Brasil S.A. (TBG), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    The intent of this work is to explore the behavior of the random error associated with determination of linepack in a complex natural gas pipeline, based on the effect introduced by the uncertainty of the different variables involved. There are many parameters involved in the determination of the gas inventory in a transmission pipeline: geometrical (diameter, length and elevation profile), operational (pressure, temperature and gas composition), environmental (ambient / ground temperature) and those dependent on the modeling assumptions (compressibility factor and heat transfer coefficient). Due to the extent of a natural gas pipeline and the vast number of sensors involved, it is infeasible to determine analytically the magnitude of the resulting uncertainty in the linepack, so this problem has been addressed using the Monte Carlo method. The approach consists of introducing random errors in the values of pressure, temperature and gas gravity that are employed in the determination of the linepack and verifying their impact. Additionally, the errors associated with three different modeling assumptions to estimate the linepack are explored. The results reveal that pressure is the most critical variable while temperature is the least critical. In regard to the different methods to estimate the linepack, deviations of around 1.6% were verified among the methods. (author)
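
    The Monte Carlo approach described can be illustrated with a deliberately simplified single-segment linepack model (real-gas equation with a fixed compressibility factor); real pipeline models are far more detailed, and every number below is an assumption chosen only to show how input uncertainties map into an output error distribution.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 100_000

        V = 5.0e4              # segment volume, m^3 (assumed)
        R = 8.314              # J/(mol K)
        Z = 0.90               # fixed compressibility factor (simplification)

        # nominal conditions with assumed 1-sigma measurement errors
        p = rng.normal(70e5, 0.3e5, n)          # pressure, Pa
        T = rng.normal(288.0, 1.0, n)           # temperature, K
        M = rng.normal(17.4e-3, 0.1e-3, n)      # molar mass inferred from gas gravity, kg/mol

        linepack = p * V * M / (Z * R * T)      # mass of gas in the segment, kg
        print("mean linepack [t]:", linepack.mean() / 1e3)
        print("relative standard deviation [%]:", 100 * linepack.std() / linepack.mean())

    Perturbing one input at a time reproduces the ranking exercise, e.g. confirming whether pressure or temperature dominates the linepack error.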

  2. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  3. Correcting for particle counting bias error in turbulent flow

    Science.gov (United States)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even with an ideal seeding device generating particles that exactly follow the flow, measurements are still subject to a major source of error, i.e., a particle counting bias wherein the probability of measuring velocity is a function of velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular it is sometimes difficult to know if the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
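
    The counting bias and one common correction (inverse-velocity weighting, in the spirit of McLaughlin-Tiederman residence-time weighting) are easy to demonstrate with a toy simulation of the kind described: velocities are drawn from a known distribution, samples are accepted with probability proportional to speed, and the weighted mean is compared with the naive mean. All numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        u_true = rng.normal(10.0, 2.0, 1_000_000)        # "true" velocities carried past the probe

        speed = np.abs(u_true)
        accept = rng.uniform(0.0, speed.max(), speed.size) < speed   # data rate proportional to |u|
        u_meas = u_true[accept]

        naive_mean = u_meas.mean()
        weights = 1.0 / np.abs(u_meas)                   # inverse-velocity weights
        corrected_mean = np.sum(weights * u_meas) / np.sum(weights)

        print("true mean:", u_true.mean())
        print("biased (naive) mean:", naive_mean)
        print("bias-corrected mean:", corrected_mean)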

  4. [Event-related EEG potentials associated with error detection in psychiatric disorder: literature review].

    Science.gov (United States)

    Balogh, Lívia; Czobor, Pál

    2010-01-01

    Error-related bioelectric signals constitute a special subgroup of event-related potentials. Researchers have identified two evoked potential components to be closely related to error processing, namely the error-related negativity (ERN) and the error positivity (Pe), and they linked these to specific cognitive functions. In our article we first give a brief description of these components; then, based on the available literature, we review differences in error-related evoked potentials observed in patients across psychiatric disorders. The PubMed and Medline search engines were used to identify all relevant articles published between 2000 and 2009. For the purpose of the current paper we reviewed publications summarizing results of clinical trials. Patients suffering from schizophrenia, anorexia nervosa or borderline personality disorder exhibited a decrease in the amplitude of the error-related negativity when compared with healthy controls, while in cases of depression and anxiety an increase in the amplitude has been observed. Some of the articles suggest specific personality variables, such as impulsivity, perfectionism, negative emotions or sensitivity to punishment, to underlie these electrophysiological differences. Research in the field of error-related electric activity has come into the focus of psychiatric research only recently; thus, the amount of available data is still limited. However, since this is a relatively new field of research, the results available at present are noteworthy and promising for future electrophysiological investigations in psychiatric disorders.

  5. Soft errors in dynamic random access memories - a basis for dosimetry

    International Nuclear Information System (INIS)

    Haque, A.K.M.M.; Yates, J.; Stevens, D.

    1986-01-01

    The soft error rates of a number of 64k and 256k dRAMs from several manufacturers have been measured, employing a MC 68000 microprocessor. For this 'accelerated test' procedure, a 37 kBq (1 μCi) ²⁴¹Am alpha-emitting source was used. Both 64k and 256k devices exhibited widely differing error rates. It was generally observed that the spread of errors over a particular device/manufacturer was much smaller than the differences between device families and manufacturers. Bit line errors formed a significant part of the total for 64k dRAMs, whereas in 256k dRAMs cell errors dominated; the latter also showed an enhanced sensitivity to integrated dose leading to total failure, and a time-dependent recovery. Although several theoretical models explain soft error mechanisms and predict responses which are compatible with our experimental results, it is considered that microdosimetric and track structure methods should be applied to the problem for its better appreciation. Finally, attention is drawn to the need for further studies of dRAMs, with a view to their use as digital dosemeters. (author)

  6. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  7. Quantification of human error and common-mode failures in man-machine systems

    International Nuclear Information System (INIS)

    Lisboa, J.J.

    1988-01-01

    Quantification of human performance, particularly the determination of human error, is essential for realistic assessment of overall system performance of man-machine systems. This paper presents an analysis of human errors in nuclear power plant systems when measured against common-mode failures (CMF). Human errors evaluated are improper testing, inadequate maintenance strategy, and miscalibration. The methodology presented in the paper represents a positive contribution to power plant systems availability by identifying sources of common-mode failure when operational functions are involved. It is also applicable to other complex systems such as chemical plants, aircraft and motor industries; in fact, any large man-created, man-machine system could be included

  8. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after corrected with empirical model and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most of the existing studies mainly focus on handling the systematic errors that can be properly modeled and then simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled especially when they are significant. Therefore, a very first question is how to statistically validate the significance of unmodeled errors. In this research, we will propose a procedure to examine the significance of these unmodeled errors by the combined use of the hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested by using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further reassured by applying the time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, the spatiotemporally correlated unmodeled errors are commonly existent in GNSS observations and mainly governed by the residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
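
    As a drastically simplified stand-in for the combined testing procedure, the sketch below checks whether residuals still contain a temporally correlated (unmodeled) component, using the lag-1 autocorrelation against its approximate white-noise bound, and computes an Allan-variance point of the kind used for reassurance in the paper. The simulated series mixes white noise with a slowly varying bias, and all constants are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 5000
        white = rng.normal(0.0, 0.01, n)                          # measurement noise, m
        bias = 0.005 * np.sin(2 * np.pi * np.arange(n) / 600.0)   # unmodeled slowly varying error
        resid = white + bias

        r = resid - resid.mean()
        rho1 = np.sum(r[:-1] * r[1:]) / np.sum(r * r)             # lag-1 autocorrelation
        bound = 1.96 / np.sqrt(n)                                 # approximate 95% white-noise bound
        print("lag-1 autocorrelation:", rho1, "significant:", abs(rho1) > bound)

        def allan_var(x, m):
            """Non-overlapping Allan variance at averaging factor m."""
            means = x[: (len(x) // m) * m].reshape(-1, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(means) ** 2)

        print("Allan deviation at m = 100:", np.sqrt(allan_var(resid, 100)))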

  9. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

    textabstractWe examine factors affecting the auditor’s willingness to report their own or their peers’ self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process; however, such

  10. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  11. Unidentified point sources in the IRAS minisurvey

    Science.gov (United States)

    Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.

    1984-01-01

    Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.

  12. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    Energy Technology Data Exchange (ETDEWEB)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States)

    2015-06-15

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  13. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    International Nuclear Information System (INIS)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D

    2015-01-01

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  14. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    Science.gov (United States)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  15. Pase de guardia: información relevante y toma de decisiones en clínica médica. Estudio prospectivo [Handoffs: relevant information and decision making in internal medicine: A prospective study]

    Directory of Open Access Journals (Sweden)

    Andrea Vázquez

    2011-09-01

    Full Text Available Introduction. Handoffs are a medical activity in which information and responsibility are transferred between professionals at points of discontinuity or transition in patient care. Handoffs are a source of medical errors and adverse events, yet formal training in this specific competency is absent from medical residency curricula. In this context, we implemented the educational project 'Oral and written handoffs in the internal medicine residency program'. Materials and methods. We defined the construct 'relevant information' using five items, one systemic and four cognitive, and, in a prospective study, we assessed the prevalence of relevant-information deficits and their effect on clinical practice. Results. In 230 handoff protocols, the prevalence of relevant-information deficits was 31.3% (n = 72), affecting both the systemic item (11%) and the items with substantive content (20%). When relevant information was available, 34.6% of the resulting decisions were active and 65.4% passive; with relevant-information deficits, 13.9% were active and 86.1% passive. These differences were significant.

  16. Passage relevance models for genomics search

    Directory of Open Access Journals (Sweden)

    Frieder Ophir

    2009-03-01

    Full Text Available Abstract We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.

  17. Consistent errors in first strand cDNA due to random hexamer mispriming.

    Directory of Open Access Journals (Sweden)

    Thomas P van Gurp

    Full Text Available Priming of random hexamers in cDNA synthesis is known to show sequence bias, but in addition it has been suggested recently that mismatches in random hexamer priming could be a cause of mismatches between the original RNA fragment and observed sequence reads. To explore random hexamer mispriming as a potential source of these errors, we analyzed two independently generated RNA-seq datasets of synthetic ERCC spikes for which the reference is known. First strand cDNA synthesized by random hexamer priming on RNA showed consistent position and nucleotide-specific mismatch errors in the first seven nucleotides. The mismatch errors found in both datasets are consistent in distribution and thermodynamically stable mismatches are more common. This strongly indicates that RNA-DNA mispriming of specific random hexamers causes these errors. Due to their consistency and specificity, mispriming errors can have profound implications for downstream applications if not dealt with properly.

  18. Extracting the relevant delays in time series modelling

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1997-01-01

    In this contribution, we suggest a convenient way to use generalisation error to extract the relevant delays from a time-varying process, i.e. the delays that lead to the best prediction performance. We design a generalisation-based algorithm that takes its inspiration from traditional variable selection, and more precisely stepwise forward selection. The method is compared to other forward selection schemes, as well as to a nonparametric test aimed at estimating the embedding dimension of time series. The final application extends these results to the efficient estimation of FIR filters on some...
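
    A minimal sketch of the general idea, using a held-out validation error as a stand-in for the generalisation error and a linear predictor over candidate delays; the toy process, candidate set and stopping rule are illustrative, not the authors' algorithm.

        # Stepwise forward selection of time-series delays by validation error (illustrative).
        import numpy as np

        rng = np.random.default_rng(1)
        x = np.zeros(2000)
        for t in range(3, 2000):                 # toy process depending on lags 1 and 3
            x[t] = 0.6 * x[t - 1] - 0.4 * x[t - 3] + rng.normal(0, 0.1)

        def make_design(series, delays, max_delay):
            X = np.column_stack([series[max_delay - d:-d] for d in delays])
            y = series[max_delay:]
            return X, y

        candidates, selected, max_delay = list(range(1, 11)), [], 10
        best_err = np.inf
        while candidates:
            scores = {}
            for d in candidates:
                X, y = make_design(x, selected + [d], max_delay)
                n_train = int(0.7 * len(y))
                w, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
                scores[d] = np.mean((X[n_train:] @ w - y[n_train:]) ** 2)  # validation MSE
            d_best = min(scores, key=scores.get)
            if scores[d_best] >= best_err:       # stop when validation error no longer improves
                break
            best_err = scores[d_best]
            selected.append(d_best)
            candidates.remove(d_best)
        print("selected delays:", selected, "validation MSE:", best_err)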

  19. The computation of equating errors in international surveys in education.

    Science.gov (United States)

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
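
    The following sketch illustrates a simple common-item linking error of the kind discussed above, computed as the uncertainty of the mean shift in item difficulties between two cycles and then combined in quadrature with a sampling error; the item parameters are invented and the formula is the basic one-level version, not the replication-based alternative the paper proposes.

        # Illustrative computation of a common-item linking error between two test cycles.
        import numpy as np

        rng = np.random.default_rng(2)
        n_items = 20
        diff_cycle1 = rng.normal(0.0, 1.0, n_items)                 # item difficulties, cycle 1 (logits)
        diff_cycle2 = diff_cycle1 + rng.normal(0.0, 0.08, n_items)  # re-estimated in cycle 2

        shift = diff_cycle2 - diff_cycle1              # per-item change in estimated difficulty
        linking_error = shift.std(ddof=1) / np.sqrt(n_items)   # uncertainty of the mean shift
        print(f"linking error: {linking_error:.4f} logits")

        # The linking error is then added in quadrature to the sampling error of a trend estimate:
        sampling_se = 0.025                            # illustrative sampling standard error
        total_se = np.sqrt(sampling_se**2 + linking_error**2)
        print(f"standard error of trend including linking error: {total_se:.4f}")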

  20. Analysis and improvement of gas turbine blade temperature measurement error

    International Nuclear Information System (INIS)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-01-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed. (paper)

  1. Analysis and improvement of gas turbine blade temperature measurement error

    Science.gov (United States)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-10-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.

  2. Sources of dispersion in industrial radiography

    International Nuclear Information System (INIS)

    Ruault, P.A.

    1985-01-01

    The maximum divergence produced by each of the different parameters is summarized. Parameters examined include the emulsion (ageing, processing), the source (filtration, energy, angle), the dosimetry, the object and the print. When tests are performed under the same conditions, errors are relatively low, but when the same object is tested again after a long time, errors can be significant [fr

  3. Dissociating response conflict and error likelihood in anterior cingulate cortex.

    Science.gov (United States)

    Yeung, Nick; Nieuwenhuis, Sander

    2009-11-18

    Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.

  4. Dissociable genetic contributions to error processing: a multimodal neuroimaging study.

    Directory of Open Access Journals (Sweden)

    Yigal Agam

    Full Text Available Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN), an event-related potential, and functional MRI activation of the dorsal anterior cingulate cortex (dACC). While theorized to reflect the same neural process, recent evidence suggests that the ERN arises from the posterior cingulate cortex, not the dACC. Here, we tested the hypothesis that these two error markers also have different genetic mediation. We measured both error markers in a sample of 92 comprised of healthy individuals and those with diagnoses of schizophrenia, obsessive-compulsive disorder or autism spectrum disorder. Participants performed the same task during functional MRI and simultaneously acquired magnetoencephalography and electroencephalography. We examined the mediation of the error markers by two single nucleotide polymorphisms: dopamine D4 receptor (DRD4) C-521T (rs1800955), which has been associated with the ERN, and methylenetetrahydrofolate reductase (MTHFR) C677T (rs1801133), which has been associated with error-related dACC activation. We then compared the effects of each polymorphism on the two error markers modeled as a bivariate response. We replicated our previous report of a posterior cingulate source of the ERN in healthy participants in the schizophrenia and obsessive-compulsive disorder groups. The effect of genotype on error markers did not differ significantly by diagnostic group. DRD4 C-521T allele load had a significant linear effect on ERN amplitude, but not on dACC activation, and this difference was significant. MTHFR C677T allele load had a significant linear effect on dACC activation but not ERN amplitude, but the difference in effects on the two error markers was not significant. DRD4 C-521T, but not MTHFR C677T, had a significant differential effect on two canonical error markers. Together with the anatomical dissociation between the ERN and error-related dACC activation, these findings suggest that

  5. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    Science.gov (United States)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function and in turn how they may be optimized are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as their mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly-scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects the model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process
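
    A minimal sketch of the clustering step described above, assuming each image tile is summarized by a small spectral feature vector and grouped with affinity propagation so that exemplar tiles can seed new training samples; the feature layout and data are invented for the example.

        # Illustrative clustering of per-tile spectral summaries of commission errors with
        # affinity propagation, then picking one representative (exemplar) tile per cluster.
        import numpy as np
        from sklearn.cluster import AffinityPropagation
        from sklearn.datasets import make_blobs

        # Each row stands for one image tile: a handful of spectral summary statistics
        # for areas with commission errors (synthetic, clustered data for the example).
        tile_features, _ = make_blobs(n_samples=300, n_features=5, centers=6, random_state=0)

        model = AffinityPropagation(random_state=0).fit(tile_features)
        centers = model.cluster_centers_indices_        # indices of exemplar tiles
        print("number of clusters:", len(centers))
        for label, tile_idx in enumerate(centers):
            members = np.where(model.labels_ == label)[0]
            print(f"cluster {label}: exemplar tile {tile_idx}, {len(members)} member tiles")
        # The exemplar tiles would then be used as sites for creating new training samples.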

  6. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  7. The Register-based Census in Germany: Historical Context and Relevance for Population Research

    Directory of Open Access Journals (Sweden)

    Rembrandt Scholz

    2016-08-01

    Full Text Available In 2011, Germany carried out its first census after a 20-year break. In light of the United Nations’ recommendations that countries initiate a population census at least every 10 years, the census was long overdue. Moreover, demographers had for some time been demanding a new enumeration that would enable them to place the calculation of demographic indicators on a reliable basis. With the 2011 census, Germany not only met the demand for a current population census, but also broke new ground by using a register-based approach. Unlike the Scandinavian countries, which have a long tradition of performing register-based data analyses, the linking of administrative data in Germany is restricted by the country’s legal framework. Thus, the 2011 census was an ambitious project. After contextualising the 2011 census historically, we discuss in this contribution the census’ relevance for generating central demographic data. Specifically, we compare the updated population estimates of the 1987 census to the results of the 2011 census in order to identify possible systematic sources of error that distort demographic indicators and analyses.

  8. Culture and error in space: implications from analog environments.

    Science.gov (United States)

    Helmreich, R L

    2000-09-01

    An ongoing study investigating national, organizational, and professional cultures in aviation and medicine is described. Survey data from 26 nations on 5 continents show highly significant national differences regarding appropriate relationships between leaders and followers, in group vs. individual orientation, and in values regarding adherence to rules and procedures. These findings replicate earlier research on dimensions of national culture. Data collected also isolate significant operational issues in multi-national flight crews. While there are no better or worse cultures, these cultural differences have operational implications for the way crews function in an international space environment. The positive professional cultures of pilots and physicians exhibit a high enjoyment of the job and professional pride. However, a negative component was also identified characterized by a sense of personal invulnerability regarding the effects of stress and fatigue on performance. This misperception of personal invulnerability has operational implications such as failures in teamwork and increased probability of error. A second component of the research examines team error in operational environments. From observational data collected during normal flight operations, new models of threat and error and their management were developed that can be generalized to operations in space and other socio-technological domains. Five categories of crew error are defined and their relationship to training programs in team performance, known generically as Crew Resource Management, is described. The relevance of these data for future spaceflight is discussed.

  9. A Nonlinear Adaptive Filter for Gyro Thermal Bias Error Cancellation

    Science.gov (United States)

    Galante, Joseph M.; Sanner, Robert M.

    2012-01-01

    Deterministic errors in angular rate gyros, such as thermal biases, can have a significant impact on spacecraft attitude knowledge. In particular, thermal biases are often the dominant error source in MEMS gyros after calibration. Filters, such as MEKFs, are commonly used to mitigate the impact of gyro errors and gyro noise on spacecraft closed loop pointing accuracy, but often have difficulty in rapidly changing thermal environments and can be computationally expensive. In this report an existing nonlinear adaptive filter is used as the basis for a new nonlinear adaptive filter designed to estimate and cancel thermal bias effects. A description of the filter is presented along with an implementation suitable for discrete-time applications. A simulation analysis demonstrates the performance of the filter in the presence of noisy measurements and provides a comparison with existing techniques.
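
    A toy sketch of the underlying idea of adapting a temperature-dependent gyro bias model online from rate residuals; the affine bias model, normalized gradient update and all numerical values are assumptions for illustration, not the filter described in the report.

        # Toy online adaptation of a temperature-dependent gyro bias, b(T) = a0 + a1*(T - 20),
        # driven by the rate residual against an attitude reference (all values invented).
        import numpy as np

        rng = np.random.default_rng(4)
        dt, n_steps, gain = 0.1, 20000, 0.1
        a_true = np.array([0.01, 0.002])   # bias at 20 degC (rad/s) and slope (rad/s per degC)
        a_hat = np.zeros(2)

        for k in range(n_steps):
            temp = 20.0 + 15.0 * np.sin(2 * np.pi * k * dt / 300.0)   # slowly varying temperature
            true_rate = 0.05 * np.sin(2 * np.pi * k * dt / 60.0)       # body rate from the reference
            phi = np.array([1.0, temp - 20.0])                         # regressor for the bias model
            gyro = true_rate + a_true @ phi + rng.normal(0.0, 1e-4)    # measured rate: bias + noise
            residual = gyro - true_rate - a_hat @ phi                  # residual after bias removal
            a_hat += gain * residual * phi / (1.0 + phi @ phi)         # normalized gradient update

        print("true coefficients:     ", a_true)
        print("estimated coefficients:", a_hat)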

  10. Chandra Source Catalog: User Interface

    Science.gov (United States)

    Bonaventura, Nina; Evans, Ian N.; Rots, Arnold H.; Tibbetts, Michael S.; van Stone, David W.; Zografou, Panagoula; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Helen; Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is intended to be the definitive catalog of all X-ray sources detected by Chandra. For each source, the CSC provides positions and multi-band fluxes, as well as derived spatial, spectral, and temporal source properties. Full-field and source region data products are also available, including images, photon event lists, light curves, and spectra. The Chandra X-ray Center CSC website (http://cxc.harvard.edu/csc/) is the place to visit for high-level descriptions of each source property and data product included in the catalog, along with other useful information, such as step-by-step catalog tutorials, answers to FAQs, and a thorough summary of the catalog statistical characterization. Eight categories of detailed catalog documents may be accessed from the navigation bar on most of the 50+ CSC pages; these categories are: About the Catalog, Creating the Catalog, Using the Catalog, Catalog Columns, Column Descriptions, Documents, Conferences, and Useful Links. There are also prominent links to CSCview, the CSC data access GUI, and related help documentation, as well as a tutorial for using the new CSC/Google Earth interface. Catalog source properties are presented in seven scientific categories, within two table views: the Master Source and Source Observations tables. Each X-ray source has one ``master source'' entry and one or more ``source observation'' entries, the details of which are documented on the CSC ``Catalog Columns'' pages. The master source properties represent the best estimates of the properties of a source; these are extensively described on the following pages of the website: Position and Position Errors, Source Flags, Source Extent and Errors, Source Fluxes, Source Significance, Spectral Properties, and Source Variability. The eight tutorials (``threads'') available on the website serve as a collective guide for accessing, understanding, and manipulating the source properties and data products provided by the catalog.

  11. Real-Time Emulation of Nonstationary Channels in Safety-Relevant Vehicular Scenarios

    Directory of Open Access Journals (Sweden)

    Golsa Ghiaasi

    2018-01-01

    Full Text Available This paper proposes and discusses the architecture for a real-time vehicular channel emulator capable of reproducing the input/output behavior of nonstationary time-variant radio propagation channels in safety-relevant vehicular scenarios. The vehicular channel emulator architecture aims at a hardware implementation which requires minimal hardware complexity for emulating channels with the varying delay-Doppler characteristics of safety-relevant vehicular scenarios. The varying delay-Doppler characteristics require real-time updates to the multipath propagation model for each local stationarity region. The vehicular channel emulator is used for benchmarking the packet error performance of commercial off-the-shelf (COTS) vehicular IEEE 802.11p modems and a fully software-defined radio-based IEEE 802.11p modem stack. The packet error ratio (PER) estimated from temporal averaging over a single virtual drive and the packet error probability (PEP) estimated from ensemble averaging over repeated virtual drives are evaluated and compared for the same vehicular scenario. The proposed architecture is realized as a virtual instrument on National Instruments™ LabVIEW. The National Instruments universal software radio peripheral with reconfigurable input-output (USRP-Rio 2953R) is used as the software-defined radio platform for implementation; however, the results and considerations reported are of general purpose and can be applied to other platforms. Finally, we discuss the PER performance of the modem for two categories of vehicular channel models: a vehicular nonstationary channel model derived for the urban single-lane street-crossing scenario of the DRIVEWAY’09 measurement campaign and the stationary ETSI models.
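
    The distinction between PER (a temporal average over one virtual drive) and PEP (an ensemble average over repeated drives) can be illustrated with a few lines of simulation; the error-probability profile below is invented and does not correspond to the DRIVEWAY’09 channel.

        # Illustrative distinction between PER (temporal average over one virtual drive)
        # and PEP (ensemble average over repeated drives) for a nonstationary channel.
        import numpy as np

        rng = np.random.default_rng(5)
        n_packets, n_drives = 400, 200
        # Toy nonstationary channel: the packet error probability varies along the drive.
        p_error = 0.02 + 0.3 * np.exp(-((np.arange(n_packets) - 200) ** 2) / (2 * 30.0**2))

        errors = rng.random((n_drives, n_packets)) < p_error    # packet error events

        per_single_drive = errors[0].mean()       # temporal average over one drive
        pep_per_position = errors.mean(axis=0)    # ensemble average at each point of the drive

        print(f"PER from one drive:       {per_single_drive:.3f}")
        print(f"peak PEP along the drive: {pep_per_position.max():.3f}")
        print(f"drive-averaged PEP:       {pep_per_position.mean():.3f}")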

  12. "Using recruitment source timing and diagnosticity to enhance applicants' occupation-specific human capital": Correction to Campion, Ployhart, and Campion (2017).

    Science.gov (United States)

    2017-05-01

    Reports an error in "Using Recruitment Source Timing and Diagnosticity to Enhance Applicants' Occupation-Specific Human Capital" by Michael C. Campion, Robert E. Ployhart and Michael A. Campion ( Journal of Applied Psychology , Advanced Online Publication, Feb 02, 2017, np). In the article, the following headings were inadvertently set at the wrong level: Method, Participants and Procedure, Measures, Occupation specific human capital, Symbolic jobs, Relevant majors, Occupation-specific capital hotspots, Source timing, Source diagnosticity, Results, and Discussion. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2017-04566-001.) This study proposes that reaching applicants through more diagnostic recruitment sources earlier in their educational development (e.g., in high school) can lead them to invest more in their occupation-specific human capital (OSHC), thereby making them higher quality candidates. Using a sample of 78,157 applicants applying for jobs within a desirable professional occupation in the public sector, results indicate that applicants who report hearing about the occupation earlier, and applicants who report hearing about the occupation through more diagnostic sources, have higher levels of OSHC upon application. Additionally, source timing and diagnosticity affect the likelihood of candidates applying for jobs symbolic of the occupation, selecting relevant majors, and attending educational institutions with top programs related to the occupation. These findings suggest a firm's recruiting efforts may influence applicants' OSHC investment strategies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  14. Release modes and processes relevant to source-term calculations at Yucca Mountain

    International Nuclear Information System (INIS)

    Apted, M.J.

    1994-01-01

    The feasibility of permanent disposal of radioactive high-level waste (HLW) in repositories located in deep geologic formations is being studied world-wide. The most credible release pathway is interaction between groundwater and nuclear waste forms, followed by migration of radionuclide-bearing groundwater to the accessible environment. Under hydrologically unsaturated conditions, vapor transport of volatile radionuclides is also possible. The near-field encompasses the waste packages composed of engineered barriers (e.g. man-made materials, such as vitrified waste forms and corrosion-resistant containers), while the far-field includes the natural barriers (e.g. host rock, hydrologic setting). Taken together, these two subsystems define a series of multiple, redundant barriers that act to assure the safe isolation of nuclear waste. In the U.S., the Department of Energy (DOE) is investigating the feasibility of safe, long-term disposal of high-level nuclear waste at the Yucca Mountain site in Nevada. The proposed repository horizon is located in non-welded tuffs within the unsaturated zone (i.e. above the water table) at Yucca Mountain. The purpose of this paper is to describe the source-term models for radionuclide release from waste packages at the Yucca Mountain site. The first section describes the conceptual release modes that are relevant for this site and waste package design, based on a consideration of the performance of currently proposed engineered barriers under expected and unexpected conditions. No attempt is made to assess the reasonableness nor the probability of occurrence of any specific release mode. The following section reviews the waste-form characteristics that are required to model and constrain the release of radionuclides from the waste package. The next section presents mathematical models for the conceptual release modes, selected from those that have been implemented into a probabilistic total system assessment code developed for the Electric Power

  15. Sources for charged particles; Les sources de particules chargees

    Energy Technology Data Exchange (ETDEWEB)

    Arianer, J.

    1997-09-01

    This document is a basic course on charged particle sources for post-graduate students and thematic schools on large facilities and accelerator physics. A simple but precise description of the creation and the emission of charged particles is presented. This course relies on reference documents that are updated every year. The following relevant topics are considered: electronic emission processes, technological and practical considerations on electron guns, positron sources, production of neutral atoms, ionization, plasma and discharge, different types of positive and negative ion sources, polarized particle sources, materials for the construction of ion sources, and low energy beam production and transport. (N.T.).

  16. Inadequacies of Physical Examination as a Cause of Medical Errors and Adverse Events: A Collection of Vignettes.

    Science.gov (United States)

    Verghese, Abraham; Charlton, Blake; Kassirer, Jerome P; Ramsey, Meghan; Ioannidis, John P A

    2015-12-01

    Oversights in the physical examination are a type of medical error not easily studied by chart review. They may be a major contributor to missed or delayed diagnosis, unnecessary exposure to contrast and radiation, incorrect treatment, and other adverse consequences. Our purpose was to collect vignettes of physical examination oversights and to capture the diversity of their characteristics and consequences. A cross-sectional study using an 11-question qualitative survey for physicians was distributed electronically, with data collected from February to June of 2011. The participants were all physicians responding to e-mail or social media invitations to complete the survey. There were no limitations on geography, specialty, or practice setting. Of the 208 reported vignettes that met inclusion criteria, the oversight was caused by a failure to perform the physical examination in 63%; 14% reported that the correct physical examination sign was elicited but misinterpreted, whereas 11% reported that the relevant sign was missed or not sought. Consequence of the physical examination inadequacy included missed or delayed diagnosis in 76% of cases, incorrect diagnosis in 27%, unnecessary treatment in 18%, no or delayed treatment in 42%, unnecessary diagnostic cost in 25%, unnecessary exposure to radiation or contrast in 17%, and complications caused by treatments in 4%. The mode of the number of physicians missing the finding was 2, but many oversights were missed by many physicians. Most oversights took up to 5 days to identify, but 66 took longer. Special attention and skill in examining the skin and its appendages, as well as the abdomen, groin, and genitourinary area could reduce the reported oversights by half. Physical examination inadequacies are a preventable source of medical error, and adverse events are caused mostly by failure to perform the relevant examination. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced
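
    As a much simpler stand-in for the fault-tolerant Steane-code simulations described above, the following sketch Monte Carlo estimates the logical error rate of a 3-qubit repetition code under independent bit-flip noise, illustrating the threshold behaviour (below a threshold, encoding reduces the error rate).

        # Illustrative Monte Carlo of logical vs physical error rate for a 3-qubit
        # repetition code under independent bit-flip noise (not Steane's 7-qubit code).
        import numpy as np

        rng = np.random.default_rng(6)

        def logical_error_rate(p, n_trials=200_000):
            flips = rng.random((n_trials, 3)) < p     # independent bit flips on 3 qubits
            majority_wrong = flips.sum(axis=1) >= 2   # majority vote fails if >= 2 flips
            return majority_wrong.mean()

        for p in (0.001, 0.01, 0.1, 0.3, 0.5):
            pl = logical_error_rate(p)
            print(f"physical error rate {p:5.3f} -> logical error rate {pl:.5f} "
                  f"({'helps' if pl < p else 'hurts'})")
        # Below the threshold the encoding reduces the error rate; the fault-tolerant
        # threshold quoted in the abstract is far lower because the correction circuitry
        # itself is noisy, unlike the idealized correction assumed here.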

  18. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.
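
    A minimal sketch of generating a noise signal with a prescribed mean, variance and power spectral density by spectrally shaping white noise; the target PSD here is invented and is not the CAST estimation error.

        # Illustrative generation of a noise signal matching a target mean, variance and
        # power spectral density shape (the target PSD is a toy low-pass model).
        import numpy as np

        rng = np.random.default_rng(7)
        n, fs = 8192, 100.0
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        target_psd = 1.0 / (1.0 + (freqs / 2.0) ** 2)      # toy low-pass PSD shape

        # Shape white noise in the frequency domain by the square root of the target PSD.
        white = np.fft.rfft(rng.standard_normal(n))
        shaped = np.fft.irfft(white * np.sqrt(target_psd), n)

        # Rescale to the desired first two moments (the PSD shape is preserved up to a scale).
        target_mean, target_std = 0.002, 0.01
        signal = target_mean + target_std * (shaped - shaped.mean()) / shaped.std()

        print("mean:", signal.mean(), "std:", signal.std())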

  19. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  20. Categorization of radiation sources

    International Nuclear Information System (INIS)

    2000-12-01

    The objective of this report is to develop a categorization scheme for radiation sources that could be relevant to decisions both in a retrospective application to bring sources under control and in a prospective sense to guide the application of the regulatory infrastructure. The Action Plan envisages that the preparation of guidance on national strategies and programmes for the detection and location of orphan sources and their subsequent management should commence after the categorization of sources has been carried out. In the prospective application of the system of notification, registration, and licensing, the categorization is relevant to prioritize a regulatory authority's resources and training activities; to guide the degree of detail necessary for a safety assessment; and to serve as a measure of the intensity of effort which a regulatory authority should apply to the safety and security of a particular type of source

  1. Positive phase error from parallel conductance in tetrapolar bio-impedance measurements and its compensation

    Directory of Open Access Journals (Sweden)

    Ivan M Roitt

    2010-01-01

    Full Text Available Bioimpedance measurements are of great use and can provide considerable insight into biological processes.  However, there are a number of possible sources of measurement error that must be considered.  The most dominant source of error is found in bipolar measurements where electrode polarisation effects are superimposed on the true impedance of the sample.  Even with the tetrapolar approach that is commonly used to circumvent this issue, other errors can persist. Here we characterise the positive phase and rise in impedance magnitude with frequency that can result from the presence of any parallel conductive pathways in the measurement set-up.  It is shown that fitting experimental data to an equivalent electrical circuit model allows for accurate determination of the true sample impedance as validated through finite element modelling (FEM of the measurement chamber.  Finally, the model is used to extract dispersion information from cell cultures to characterise their growth.
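
    A minimal sketch of fitting an assumed equivalent circuit to a complex impedance spectrum in order to separate the sample impedance from a parallel stray pathway; the circuit topology, component values and noise level are invented for the example.

        # Illustrative fit of an assumed equivalent circuit to a tetrapolar impedance spectrum:
        # sample (resistor parallel capacitor) in parallel with a stray path (R in series with C).
        import numpy as np
        from scipy.optimize import least_squares

        f = np.logspace(2, 6, 60)        # 100 Hz .. 1 MHz
        w = 2.0 * np.pi * f

        def z_model(log10_params, w):
            r_s, c_s, r_p, c_p = 10.0 ** log10_params
            z_sample = r_s / (1.0 + 1j * w * r_s * c_s)       # sample impedance
            z_stray = r_p + 1.0 / (1j * w * c_p)              # stray parallel pathway
            return z_sample * z_stray / (z_sample + z_stray)  # both paths in parallel

        true = np.log10([1000.0, 10e-9, 5000.0, 100e-12])
        rng = np.random.default_rng(8)
        z_meas = z_model(true, w) * (1.0 + 0.01 * rng.standard_normal(w.size))

        def residuals(log10_params):
            diff = z_model(log10_params, w) - z_meas
            return np.concatenate([diff.real, diff.imag])     # fit real and imaginary parts

        fit = least_squares(residuals, x0=np.log10([500.0, 5e-9, 2000.0, 50e-12]))
        r_sample, c_sample = 10.0 ** fit.x[0], 10.0 ** fit.x[1]
        print(f"fitted sample resistance:  {r_sample:.0f} ohm (true 1000)")
        print(f"fitted sample capacitance: {c_sample:.2e} F (true 1.0e-08)")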

  2. Progressive significance map and its application to error-resilient image transmission.

    Science.gov (United States)

    Hu, Yang; Pearlman, William A; Li, Xin

    2012-07-01

    Set partition coding (SPC) has shown tremendous success in image compression. Despite its popularity, the lack of error resilience remains a significant challenge to the transmission of images in error-prone environments. In this paper, we propose a novel data representation called the progressive significance map (prog-sig-map) for error-resilient SPC. It structures the significance map (sig-map) into two parts: a high-level summation sig-map and a low-level complementary sig-map (comp-sig-map). Such a structured representation of the sig-map allows us to improve its error-resilient property at the price of only a slight sacrifice in compression efficiency. For example, we have found that a fixed-length coding of the comp-sig-map in the prog-sig-map renders 64% of the coded bitstream insensitive to bit errors, compared with 40% with that of the conventional sig-map. Simulation results have shown that the prog-sig-map can achieve highly competitive rate-distortion performance for binary symmetric channels while maintaining low computational complexity. Moreover, we note that prog-sig-map is complementary to existing independent packetization and channel-coding-based error-resilient approaches and readily lends itself to other source coding applications such as distributed video coding.

  3. Study of systematic errors in the luminosity measurement

    International Nuclear Information System (INIS)

    Arima, Tatsumi

    1993-01-01

    The experimental systematic error in the barrel region was estimated to be 0.44 %. This value is derived considering the systematic uncertainties from the dominant sources but does not include uncertainties which are being studied. In the end cap region, the study of shower behavior and clustering effect is under way in order to determine the angular resolution at the low angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1 %. The technical precision of theoretical uncertainty is better than 0.1 % comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use the ALIBABA [9] which includes O(α^2) QED correction in leading-log approximation. (J.P.N.)

  4. Study of systematic errors in the luminosity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Arima, Tatsumi [Tsukuba Univ., Ibaraki (Japan). Inst. of Applied Physics

    1993-04-01

    The experimental systematic error in the barrel region was estimated to be 0.44 %. This value is derived considering the systematic uncertainties from the dominant sources but does not include uncertainties which are being studied. In the end cap region, the study of shower behavior and clustering effect is under way in order to determine the angular resolution at the low angle edge of the Liquid Argon Calorimeter. We also expect that the systematic error in this region will be less than 1 %. The technical precision of theoretical uncertainty is better than 0.1 % comparing the Tobimatsu-Shimizu program and BABAMC modified by ALEPH. To estimate the physical uncertainty we will use the ALIBABA [9] which includes O(α^2) QED correction in leading-log approximation. (J.P.N.).

  5. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Full Text Available Probes for CNC machine tools, as every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are used rarely. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but it is not a widely used procedure. In this paper, the share of systematic errors and random errors in the total error of exemplary probes is determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it shows that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation in this case would not yield any significant benefits.
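
    The split between systematic (direction-dependent pre-travel) and random (repeatability) contributions can be illustrated with a short simulation; the three-lobed pre-travel pattern and all numbers below are invented, not measurements of any particular probe.

        # Illustrative split of a probe's total 2D triggering error into systematic
        # (direction-dependent pre-travel) and random (repeatability) parts.
        import numpy as np

        rng = np.random.default_rng(9)
        angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)    # probing directions
        pretravel = 5.0 + 2.0 * np.cos(3 * angles)                 # um, 3-lobed kinematic-probe pattern
        repeatability_sigma = 0.4                                  # um, unidirectional repeatability

        readings = pretravel[:, None] + rng.normal(0, repeatability_sigma, (angles.size, 50))

        systematic_part = readings.mean(axis=1).std()    # spread of per-direction means
        random_part = readings.std(axis=1).mean()        # average within-direction scatter
        total = np.sqrt(systematic_part**2 + random_part**2)
        print(f"systematic: {systematic_part:.2f} um, random: {random_part:.2f} um, "
              f"systematic share of total: {systematic_part**2 / total**2:.0%}")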

  6. A review of setup error in supine breast radiotherapy using cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au [South Western Clinical School, University of New South Wales, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, New South Wales (Australia); Ingham Institute of Applied Medical Research, Sydney, New South Wales (Australia); Holloway, Lois [South Western Clinical School, University of New South Wales, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, New South Wales (Australia); Ingham Institute of Applied Medical Research, Sydney, New South Wales (Australia); Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales (Australia); Institute of Medical Physics, School of Physics, University of Sydney, Sydney, New South Wales (Australia); Delaney, Geoff P. [South Western Clinical School, University of New South Wales, Sydney, New South Wales (Australia); Liverpool and Macarthur Cancer Therapy Centres, New South Wales (Australia); Ingham Institute of Applied Medical Research, Sydney, New South Wales (Australia)

    2016-10-01

    Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal image, as CBCT remains unproven to be of wide benefit in breast RT.

  7. A review of setup error in supine breast radiotherapy using cone-beam computed tomography

    International Nuclear Information System (INIS)

    Batumalai, Vikneswary; Holloway, Lois; Delaney, Geoff P.

    2016-01-01

    Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal image, as CBCT remains unproven to be of wide benefit in breast RT.

  8. Systematical and statistical errors in using reference light sources to calibrate TLD readers

    International Nuclear Information System (INIS)

    Burgkhardt, B.; Piesch, E.

    1981-01-01

    Three light sources, namely an NaI(Tl) scintillator + Ra, an NaI(Tl) scintillator + 14C and a plastic scintillator + 14C, were used during a period of 24 months for a daily check of two TLD readers: the Harshaw 2000 A + B and the Toledo 651. On the basis of light source measurements long-term changes and day-to-day fluctuations of the reader response were investigated. Systematic changes of the Toledo reader response of up to 6% during a working week are explained by nitrogen effects in the plastic scintillator light source. It was found that the temperature coefficient of the light source intensity was -0.05%/°C for the plastic scintillator and -0.3%/°C for the NaI(Tl) scintillator. The 210Pb content in the Ra-activated NaI(Tl) scintillator caused a time-dependent decrease in light source intensity of 3%/yr for the light source in the Harshaw reader. The internal light sources revealed a relative standard deviation of 0.5% for the Toledo reader and the Harshaw reader after respective reading times of 0.45 and 100 sec. (author)

  9. COMPARATIVE ERROR ANALYSIS IN ENGLISH WRITING BY FIRST, SECOND, AND THIRD YEAR STUDNETS OF ENGLISH DEPARTMENT OF FACULTY OF EDUCATION AT CHAMPASACK UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Nokthavivanh Sychandone

    2016-08-01

    first year learners produced 229 errors (40.10%) and third year learners made 79 errors (13.83%). There are similarities in error types: five similar categories and five error cases, but three different error categories and eighteen error cases. As the main error source, learners lacked knowledge of English grammatical rules. Overgeneralization (265 errors, or 46.40%) influences learners’ errors, language transfer (199 errors, or 34.85%) still interferes with learners’ acquisition, and simplification (107 errors, or 18.73%) is another factor that affects learners’ errors.

  10. Applying the intention-to-treat principle in practice: Guidance on handling randomisation errors.

    Science.gov (United States)

    Yelland, Lisa N; Sullivan, Thomas R; Voysey, Merryn; Lee, Katherine J; Cook, Jonathan A; Forbes, Andrew B

    2015-08-01

    The intention-to-treat principle states that all randomised participants should be analysed in their randomised group. The implications of this principle are widely discussed in relation to the analysis, but have received limited attention in the context of handling errors that occur during the randomisation process. The aims of this article are to (1) demonstrate the potential pitfalls of attempting to correct randomisation errors and (2) provide guidance on handling common randomisation errors when they are discovered that maintains the goals of the intention-to-treat principle. The potential pitfalls of attempting to correct randomisation errors are demonstrated and guidance on handling common errors is provided, using examples from our own experiences. We illustrate the problems that can occur when attempts are made to correct randomisation errors and argue that documenting, rather than correcting these errors, is most consistent with the intention-to-treat principle. When a participant is randomised using incorrect baseline information, we recommend accepting the randomisation but recording the correct baseline data. If ineligible participants are inadvertently randomised, we advocate keeping them in the trial and collecting all relevant data but seeking clinical input to determine their appropriate course of management, unless they can be excluded in an objective and unbiased manner. When multiple randomisations are performed in error for the same participant, we suggest retaining the initial randomisation and either disregarding the second randomisation if only one set of data will be obtained for the participant, or retaining the second randomisation otherwise. When participants are issued the incorrect treatment at the time of randomisation, we propose documenting the treatment received and seeking clinical input regarding the ongoing treatment of the participant. Randomisation errors are almost inevitable and should be reported in trial publications. The

  11. A method for estimating the orientation of a directional sound source from source directivity and multi-microphone recordings: principles and application

    DEFF Research Database (Denmark)

    Guarato, Francesco; Jakobsen, Lasse; Vanderelst, Dieter

    2011-01-01

    Taking into account directivity of real sound sources makes it possible to try solving an interesting and biologically relevant problem: estimating the orientation in three-dimensional space of a directional sound source. The source, of known directivity, produces a broadband signal (in the ultra...

  12. The peak efficiency calibration of volume source using 152Eu point source in computer

    International Nuclear Information System (INIS)

    Shen Tingyun; Qian Jianfu; Nan Qinliang; Zhou Yanguo

    1997-01-01

    The author describes a method for the peak efficiency calibration of a volume source by means of a 152Eu point source for an HPGe γ spectrometer. The peak efficiency can be computed by Monte Carlo simulation after inputting the detector parameters. The computed results agree with the experimental results within an error of ±3.8%, with one exception of about ±7.4%

  13. Error quantification of osteometric data in forensic anthropology.

    Science.gov (United States)

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-04-10

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error among a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEMForensic Skeletal Material, 3rd edition. Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from landmarks that are difficult to locate consistently (e.g. pubis length, ischium length); these measurements were omitted from DCP 2.0. This manual is available for free download online (https://fac.utk.edu/wp-content/uploads/2016/03/DCP20_webversion.pdf), along with an accompanying instructional video (https://www.youtube.com/watch?v=BtkLFl3vim4). Copyright © 2018 Elsevier B.V. All rights reserved.
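
    For reference, the technical error of measurement for two observers is commonly computed as TEM = sqrt(Σd²/2n), with the relative TEM expressed as a percentage of the mean; the sketch below uses invented values.

        # Illustrative computation of technical error of measurement (TEM) and relative TEM
        # for a measurement repeated by two observers (values are invented).
        import numpy as np

        observer1 = np.array([431.0, 452.0, 447.0, 460.0, 438.0])   # e.g. maximum femur length, mm
        observer2 = np.array([430.0, 454.0, 446.0, 462.0, 437.0])

        d = observer1 - observer2
        tem = np.sqrt(np.sum(d**2) / (2 * d.size))                  # absolute TEM (same units as data)
        relative_tem = 100.0 * tem / np.concatenate([observer1, observer2]).mean()
        print(f"TEM: {tem:.2f} mm, relative TEM: {relative_tem:.2f}%")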

  14. Japanese Quality Assurance System Regarding the Provision of Material Accounting Reports and the Safeguards Relevant Information to the IAEA

    International Nuclear Information System (INIS)

    Goto, Y.; Namekawa, M.; Kumekawa, H.; Usui, A.; Sano, K.

    2015-01-01

    The provision of the safeguards relevant reports and information in accordance with the comprehensive safeguards agreement (CSA) and the additional protocol (AP) is the basis for the IAEA safeguards. The government of Japan (Japan Safeguards Office, JSGO) believes that correct reports contribute to effective and efficient safeguards; therefore, a domestic quality assurance system for reporting to the IAEA was already established at the time of the accession to the CSA in 1977. It consists of Code 10 interpretation (including the seminars for operators in Japan), SSAC's checks for syntax errors, code and internal consistency (computer based consistency checks between facilities) and the discussion with the IAEA on the facilities' measurement systems for bulk-handling facilities, which contributes to more accurate reports from operators. This spirit has been maintained for the entry into force of the AP. For example, questions and amplification requests from the IAEA are taken into account in the review of the AP declaration before it is sent to the IAEA, and open source information such as news articles and scientific literature in Japanese is collected and translated into English; the translated information is provided to the IAEA as supplementary information, which may contribute to broadening the IAEA information sources and to their comprehensive evaluation. The other safeguards relevant information, such as the mail-box information for SNRI at LEU fuel fabrication plants, is also checked by the JSGO's QC software before posting. The software was developed by JSGO and it checks data format, batch IDs, birth/death date, shipper/receiver information and material description code. This paper explains the history of the development of the Japanese quality assurance system regarding the reports and the safeguards relevant information provided to the IAEA. (author)

  15. [The room of errors, a fun learning tool].

    Science.gov (United States)

    Estival, Émilie; Sinoquet, Justine; Cluzel, Franck

    2017-03-01

    Simulation in health care, a source of innovative pedagogical developments, is particularly well-suited to the training of nursing teams. It enables them to acquire or reinforce their knowledge, without any risk for the patients, in a calm and reassuring environment. In psychiatry in particular, the use of a room of errors constitutes a useful learning tool for professionals. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  16. Isolating Graphical Failure-Inducing Input for Privacy Protection in Error Reporting Systems

    Directory of Open Access Journals (Sweden)

    Matos João

    2016-04-01

    Full Text Available This work proposes a new privacy-enhancing system that minimizes the disclosure of information in error reports. Error reporting mechanisms are of the utmost importance to correct software bugs but, unfortunately, the transmission of an error report may reveal users’ private information. Some privacy-enhancing systems for error reporting have been presented in the past years, yet they rely on path condition analysis, which we show in this paper to be ineffective when it comes to graphical input. Knowing that numerous applications have graphical user interfaces (GUIs), it is very important to overcome such a limitation. This work describes a new privacy-enhancing error reporting system, based on a new input minimization algorithm called GUIᴍɪɴ that is geared towards GUIs, to remove input that is unnecessary to reproduce the observed failure. Before deciding whether to submit the error report, the user is provided with a step-by-step graphical replay of the minimized input, to evaluate whether it still yields sensitive information. We also provide an open source implementation of the proposed system and evaluate it with well-known applications.
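
    A minimal sketch of delta-debugging-style input minimization of a GUI event sequence, with a stand-in failure oracle; this illustrates the general idea only and is not the GUIᴍɪɴ algorithm.

        # Illustrative delta-debugging-style minimization of a sequence of GUI events,
        # keeping only those needed to reproduce a failure (a simplification, not GUImin).
        def fails(events):
            """Stand-in failure oracle: the bug needs 'open_dialog' followed by 'paste'."""
            return ("open_dialog" in events and "paste" in events
                    and events.index("open_dialog") < events.index("paste"))

        def minimize(events):
            """Greedily drop events that are not needed to keep the failure reproducible."""
            reduced = list(events)
            changed = True
            while changed:
                changed = False
                for i in range(len(reduced)):
                    candidate = reduced[:i] + reduced[i + 1:]
                    if fails(candidate):
                        reduced, changed = candidate, True
                        break
            return reduced

        trace = ["click_menu", "type_name", "open_dialog", "scroll", "paste", "close"]
        print(minimize(trace))   # -> ['open_dialog', 'paste']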

  17. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    Science.gov (United States)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  18. Etiology and outcome of inborn errors of metabolism

    International Nuclear Information System (INIS)

    Choudhry, S.; Khan, M.; Khan, E.A.

    2013-01-01

    Objectives: To study the clinical presentation, diagnostic workup and outcome of children presenting with suspected inborn errors of metabolism. Methods: The cross-sectional study was conducted at the Shifa International Hospital, Islamabad, and included all patients diagnosed with the condition between January 2006 and June 2011. Medical records of the patients were reviewed to collect the relevant data. Results: A total of 10 patients underwent diagnostic work-up. The majority, 7 (70%), were males and 6 (60%) presented in the neonatal age group. Seizures and coma were the commonest presentations (n=5; 50% each) followed by breathing difficulty (n=4; 40%) and vomiting (n=2; 20%). The commonest diagnoses were methyl malonic acidaemia (n=2; 20%), non-ketotic hyperglycinaemia (n=1; 10%), fructose 1,6 diphosphatase deficiency (n=1; 10%), and biotinidase deficiency (n=1; 10%). Mortality was high (n=5; 50%) and half of the survivors had severe neurological impairment. Conclusion: The diagnosis of inborn errors of metabolism requires a high index of suspicion. These disorders have a high mortality and risk of long-term neurological disability. (author)

  19. Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.

    Science.gov (United States)

    Maier, Martin E; Steinhauser, Marco

    2013-10-02

    Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.

  20. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
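
    A minimal sketch of the screening-and-spread idea described above, assuming the monthly products are available as NumPy arrays on a common latitude-longitude grid; the separate ocean/land screening and the area-weighted averaging of the gridded errors are omitted for brevity.

        # Hedged sketch: keep each product only where its zonal mean lies within
        # +/-50% of the GPCP zonal mean, then take the standard deviation of the
        # retained products as the gridded bias-error estimate s.
        import numpy as np

        def estimated_bias_error(gpcp, others):
            """gpcp: (lat, lon) monthly mean precipitation; others: same-shape product arrays."""
            zonal_gpcp = gpcp.mean(axis=1)
            fields = [gpcp]
            for p in others:
                zonal_p = p.mean(axis=1)
                ok = np.abs(zonal_p - zonal_gpcp) <= 0.5 * np.abs(zonal_gpcp)  # per-latitude screen
                fields.append(np.where(ok[:, None], p, np.nan))                # drop failing latitudes
            stack = np.stack(fields)                                           # (n_products, lat, lon)
            return np.nanstd(stack, axis=0)                                    # bias-error estimate s

        # the relative bias error would then be s divided by the mean precipitation m.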

  1. Isotopic abundances relevant to the identification of magma sources

    International Nuclear Information System (INIS)

    O'Nions, R.K.

    1984-01-01

    The behaviour of natural radiogenic isotope tracers in the Earth that have lithophile and atmophile geochemical affinity is reviewed. The isotope tracer signature of oceanic and continental crust may in favourable circumstances be sufficiently distinct from that of the mantle to render a contribution from these sources resolvable within the isotopic composition of the magma. Components derived from the sedimentary and altered basaltic portion of oceanic crust are recognized in some island arc magmas from their Sr, Nd and Pb isotopic signatures. The rare-gas isotope tracers (He, Ar, Xe in particular) are not readily recycled into the mantle and thus provide the basis of an approach that is complementary to that based on the lithophile tracers. In particular, a small mantle-derived helium component may be readily recognized in the presence of a predominant radiogenic component generated in the continents. The importance of assessing the mass balance of these interactions rather than merely a qualitative recognition is emphasized. The question of the relative contribution of continental-oceanic crust and mantle to magma sources is an essential part of the problem of generation and evolution of continental crust. An approach to this problem through consideration of the isotopic composition of sediments is briefly discussed. (author)

  2. Localization and analysis of error sources for the numerical SIL proof; Lokalisierung und Analyse von Fehlerquellen beim numerischen SIL-Nachweis

    Energy Technology Data Exchange (ETDEWEB)

    Duepont, D.; Litz, L. [Technische Univ. Kaiserslautern (Germany). Lehrstuhl fuer Automatisierungstechnik; Netter, P. [Infraserv GmbH und Co. Hoechst KG, Frankfurt am Main (Germany)

    2008-07-01

    According to the standard IEC 61511 each safety-related loop is assigned to one of the four Safety Integrity Levels (SILs). For every safety-related loop a SIL-specific Probability of Failure on Demand (PFD) must be proven. Usually, the PFD calculation is performed bottom-up from the failure rates of each loop component, aided by commercial software tools. However, this approach suffers from many uncertainties; in particular, a lack of reliable failure rate data causes many problems. Reference data collected in different environments are available to address this situation, but this pragmatism leads to a PFD bandwidth rather than the single PFD value that is desired. In order to arrive at a numerical value appropriate for the chemical and pharmaceutical process industry, a data ascertainment was initiated by the European NAMUR. Its results reveal large deficiencies in the bottom-up approach. The error sources leading to this situation are located and analyzed. (orig.)
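
    For orientation, the bottom-up PFD proof referred to above can be sketched with the common low-demand approximation PFD_avg ≈ λ_DU · TI / 2 per 1oo1 channel, summed over the loop components; the failure rates below are placeholders, and the point is only how a bandwidth of reference rates propagates into a PFD bandwidth instead of a single value.

        # Minimal sketch (assumed placeholder rates, simple 1oo1 approximation):
        TI = 8760.0                                   # proof-test interval in hours (1 year)

        def loop_pfd(lambdas_du):
            """Sum of PFD_avg ~ lambda_DU * TI / 2 over sensor, logic solver, final element."""
            return sum(l * TI / 2 for l in lambdas_du)

        low  = [1e-7, 5e-8, 2e-7]                     # optimistic dangerous-undetected rates per hour
        high = [5e-7, 2e-7, 1e-6]                     # pessimistic rates for the same components
        print(loop_pfd(low), loop_pfd(high))          # a PFD bandwidth, not the single value desired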

  3. On the effect of numerical errors in large eddy simulations of turbulent flows

    International Nuclear Information System (INIS)

    Kravchenko, A.G.; Moin, P.

    1997-01-01

    Aliased and dealiased numerical simulations of a turbulent channel flow are performed using spectral and finite difference methods. Analytical and numerical studies show that aliasing errors are more destructive for spectral and high-order finite-difference calculations than for low-order finite-difference simulations. Numerical errors have different effects for different forms of the nonlinear terms in the Navier-Stokes equations. For divergence and convective forms, spectral methods are energy-conserving only if dealiasing is performed. For skew-symmetric and rotational forms, both spectral and finite-difference methods are energy-conserving even in the presence of aliasing errors. It is shown that discrepancies between the results of dealiased spectral and standard nondealiased finite-difference methods are due to both aliasing and truncation errors, with the latter being the leading source of differences. The relative importance of aliasing and truncation errors as compared to subgrid scale model terms in large eddy simulations is analyzed and discussed. For low-order finite-difference simulations, truncation errors can exceed the magnitude of the subgrid scale term. 25 refs., 17 figs., 1 tab
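
    As a small numerical illustration of the aliasing mechanism discussed above (grid size and wavenumber are arbitrary choices of mine, not taken from the paper): squaring a resolved Fourier mode produces a wavenumber beyond the grid's resolved range, which folds back onto a lower wavenumber unless the product is dealiased, for example by the 3/2 padding rule.

        import numpy as np

        N = 16
        x = 2 * np.pi * np.arange(N) / N
        u = np.cos(6 * x)                        # wavenumber 6 is resolved on this grid (N/2 = 8)
        spec = np.fft.rfft(u * u) / N            # u^2 contains wavenumber 12, which is not resolved
        print(np.argmax(np.abs(spec[1:])) + 1)   # energy shows up at the aliased wavenumber 16 - 12 = 4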

  4. Learning from diagnostic errors: A good way to improve education in radiology

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Antonio, E-mail: antopin1968@libero.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Acampora, Ciro, E-mail: itrasente@libero.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Pinto, Fabio, E-mail: fpinto1966@libero.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Kourdioukova, Elena, E-mail: Elena.Kourdioukova@UGent.be [Department of Radiology, Ghent University Hospital (UZG), MR/-1K12, De Pintelaan 185, B-9000 Ghent (Belgium); Romano, Luigia, E-mail: luigia.romano@fastwebnet.it [Department of Diagnostic Imaging, A. Cardarelli Hospital, I-80131 Naples (Italy); Verstraete, Koenraad, E-mail: Koenraad.Verstraete@UGent.be [Department of Radiology, Ghent University Hospital (UZG), MR/-1K12, De Pintelaan 185, B-9000 Ghent (Belgium)

    2011-06-15

    Purpose: To evaluate the causes and the main categories of diagnostic errors in radiology as a method for improving education in radiology. Material and methods: A Medline search was performed using PubMed (National Library of Medicine, Bethesda, MD) for original research publications discussing errors in diagnosis with specific reference to radiology. The search strategy employed different combinations of the following terms: (1) diagnostic radiology, (2) radiological error and (3) medical negligence. This review was limited to human studies and to English-language literature. Two authors reviewed all the titles and subsequently the abstracts of 491 articles that appeared pertinent. Additional articles were identified by reviewing the reference lists of relevant papers. Finally, the full text of 75 selected articles was reviewed. Results: Several studies show that the etiology of radiological error is multi-factorial. The main category of claims against radiologists includes the misdiagnoses. Radiologic 'misses' typically are one of two types: either missed fractures or missed diagnosis of cancer. The most commonly missed fractures include those in the femur, the navicular bone, and the cervical spine. The second type of 'miss' is failure to diagnose cancer. Lack of appreciation of lung nodules on chest radiographs and breast lesions on mammograms are the predominant problems. Conclusion: Diagnostic errors should be considered not as signs of failure, but as learning opportunities.

  5. Learning from diagnostic errors: A good way to improve education in radiology

    International Nuclear Information System (INIS)

    Pinto, Antonio; Acampora, Ciro; Pinto, Fabio; Kourdioukova, Elena; Romano, Luigia; Verstraete, Koenraad

    2011-01-01

    Purpose: To evaluate the causes and the main categories of diagnostic errors in radiology as a method for improving education in radiology. Material and methods: A Medline search was performed using PubMed (National Library of Medicine, Bethesda, MD) for original research publications discussing errors in diagnosis with specific reference to radiology. The search strategy employed different combinations of the following terms: (1) diagnostic radiology, (2) radiological error and (3) medical negligence. This review was limited to human studies and to English-language literature. Two authors reviewed all the titles and subsequently the abstracts of 491 articles that appeared pertinent. Additional articles were identified by reviewing the reference lists of relevant papers. Finally, the full text of 75 selected articles was reviewed. Results: Several studies show that the etiology of radiological error is multi-factorial. The main category of claims against radiologists includes the misdiagnoses. Radiologic 'misses' typically are one of two types: either missed fractures or missed diagnosis of cancer. The most commonly missed fractures include those in the femur, the navicular bone, and the cervical spine. The second type of 'miss' is failure to diagnose cancer. Lack of appreciation of lung nodules on chest radiographs and breast lesions on mammograms are the predominant problems. Conclusion: Diagnostic errors should be considered not as signs of failure, but as learning opportunities.

  6. What we can learn from naming errors of children with language impairment at preschool age.

    Science.gov (United States)

    Biran, Michal; Novogrodsky, Rama; Harel-Nov, Efrat; Gil, Mali; Mimouni-Bloch, Aviva

    2018-01-01

    Naming is a complex, multi-level process. It is composed of distinct semantic and phonological levels. Children with naming deficits produce different error types when failing to retrieve the target word. This study explored the error characteristics of children with language impairment compared to those with typical language development. 46 preschool children were tested on a naming test: 16 with language impairment and a naming deficit and 30 with typical language development. The analysis compared types of error in both groups. At the group level, children with language impairment produced different error patterns compared to the control group. Based on naming error analysis and performance on other language tests, two case studies of contrasting profiles suggest different sources for lexical retrieval difficulties in children. The findings reveal differences between the two groups in naming scores and naming errors, and support a qualitative impairment in early development of children with naming deficits. The differing profiles of naming deficits emphasise the importance of including error analysis in the diagnosis.

  7. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  8. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  9. Factory-discharged pharmaceuticals could be a relevant source of aquatic environment contamination: review of evidence and need for knowledge.

    Science.gov (United States)

    Cardoso, Olivier; Porcher, Jean-Marc; Sanchez, Wilfried

    2014-11-01

    Human and veterinary active pharmaceutical ingredients (APIs) are involved in contamination of surface water, ground water, effluents, sediments and biota. Effluents of waste water treatment plants and hospitals are considered major sources of such contamination. However, recent evidence reveals high concentrations of a large number of APIs in effluents from pharmaceutical factories and in receiving aquatic ecosystems. Moreover, laboratory exposures to these effluents and field experiments reveal various physiological disturbances in exposed aquatic organisms. It therefore seems relevant not only to increase knowledge on this route of contamination but also to develop specific approaches for further environmental monitoring campaigns. The present study summarizes available data related to the impact of pharmaceutical factory discharges on aquatic ecosystem contamination and presents associated challenges for scientists and environmental managers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Increasing safety of a robotic system for inner ear surgery using probabilistic error modeling near vital anatomy

    Science.gov (United States)

    Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.

    2016-03-01

    Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy.
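
    A hedged sketch of how a spatially varying, anisotropic error model translates into a variable-thickness margin (this is my illustration of the general idea, not the authors' algorithm): assuming the combined registration, imaging and robot errors at a surface point are approximately Gaussian with covariance C, the margin along the outward normal n can be sized so that the probability of violating the structure stays below a chosen level.

        import numpy as np
        from scipy.stats import norm

        def margin_thickness(cov, normal, p_violate=0.01):
            """Margin along the outward unit normal for a Gaussian error with covariance cov (mm^2)."""
            n = np.asarray(normal, dtype=float)
            n /= np.linalg.norm(n)
            sigma_n = np.sqrt(n @ cov @ n)        # error std projected onto the normal direction
            return norm.ppf(1.0 - p_violate) * sigma_n

        cov = np.diag([0.2**2, 0.2**2, 0.8**2])   # assumed anisotropic error, mm^2
        print(margin_thickness(cov, [0, 0, 1]))   # ~1.86 mm along the poorly constrained axis
        print(margin_thickness(cov, [1, 0, 0]))   # ~0.47 mm along a well-constrained axis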

  11. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  12. Separation of zeros for source signature identification under reverberant path conditions.

    Science.gov (United States)

    Hasegawa, Tomomi; Tohyama, Mikio

    2011-10-01

    This paper presents an approach to distinguishing the zeros representing a sound source from those representing the transfer function on the basis of Lyon's residue-sign model. In machinery noise diagnostics, the source signature must be separated from observation records under reverberant path conditions. In numerical examples and an experimental piano-string vibration analysis, the modal responses could be synthesized by using clustered line-spectrum modeling. The modeling error represented the source signature subject to the source characteristics being given by a finite impulse response. The modeling error can be interpreted as a remainder function necessary for the zeros representing the source signature. © 2011 Acoustical Society of America

  13. A manual to identify sources of fluvial sediment

    Science.gov (United States)

    Gellis, Allen C.; Fitzpatrick, Faith A.; Schubauer-Berigan, Joseph

    2016-01-01

    Sediment is an important pollutant of concern that can degrade and alter aquatic habitat. A sediment budget is an accounting of the sources, storage, and export of sediment over a defined spatial and temporal scale. This manual focuses on field approaches to estimate a sediment budget. We also highlight the sediment fingerprinting approach to attribute sediment to different watershed sources. Determining the sources and sinks of sediment is important in developing strategies to reduce sediment loads to water bodies impaired by sediment. Therefore, this manual can be used when developing a sediment TMDL requiring identification of sediment sources. The manual takes the user through the seven necessary steps to construct a sediment budget: (1) decision-making for the watershed scale and time period of interest; (2) familiarization with the watershed by conducting a literature review, compiling background information and maps relevant to study questions, and conducting a reconnaissance of the watershed; (3) developing partnerships with landowners and jurisdictions; (4) characterization of the watershed geomorphic setting; (5) development of a sediment budget design; (6) data collection; and (7) interpretation and construction of the sediment budget, followed by generating products (maps, reports, and presentations) to communicate findings. Sediment budget construction begins with examining the question(s) being asked and whether a sediment budget is necessary to answer these question(s). If undertaking a sediment budget analysis is a viable option, the next step is to define the spatial scale of the watershed and the time scale needed to answer the question(s). Of course, we understand that monetary constraints play a big role in any decision. Early in the sediment budget development process, we suggest getting to know your watershed by conducting a reconnaissance and meeting with local stakeholders. The reconnaissance aids in understanding the geomorphic setting of the watershed and potential sources of sediment. Identifying the potential

  14. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors, in which the coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.
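
    The locally corrected score estimator itself is not reproduced here; the short simulation below (with assumed parameter values) only illustrates the additive measurement-error setting described above and the attenuation that motivates correcting for it.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 2000
        gfr_true = rng.uniform(20, 120, n)                              # true GFR (unobserved)
        creat = 5.0 - 0.03 * gfr_true + 0.3 * rng.standard_normal(n)    # assumed linear link
        gfr_meas = gfr_true + 15.0 * rng.standard_normal(n)             # additive measurement error

        slope_true = np.polyfit(gfr_true, creat, 1)[0]
        slope_naive = np.polyfit(gfr_meas, creat, 1)[0]
        print(slope_true, slope_naive)    # the naive slope is biased toward zero (attenuation)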

  15. A framework to assess diagnosis error probabilities in the advanced MCR

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Kim, Jong Hyun [Chosun University, Gwangju (Korea, Republic of); Jang, Inseok; Park, Jinkyun [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The Institute of Nuclear Power Operations (INPO)’s operating experience database revealed that about 48% of the total events in world NPPs over two years (2010-2011) happened due to human errors. The purpose of human reliability analysis (HRA) methods is to evaluate the potential for, and mechanisms of, human errors that may affect plant safety. Accordingly, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM) and so on. Many researchers have asserted that procedures, alarms, and displays are critical factors affecting operators' generic activities, especially diagnosis activities. None of the various HRA methods was explicitly designed to deal with digital systems, and SCHEME (Soft Control Human error Evaluation MEthod) considers only the probability of soft control execution errors in the advanced MCR. The necessity of developing HRA methods for various conditions of NPPs has therefore been raised. In this research, a framework to estimate diagnosis error probabilities in the advanced MCR is suggested. The assessment framework consists of three steps. The first step is to investigate diagnosis errors and calculate their probabilities. The second step is to quantitatively estimate PSFs' weightings in the advanced MCR. The third step is to suggest an updated TRC model to assess the nominal diagnosis error probabilities. Additionally, the proposed framework was applied using full-scope simulation. Experiments conducted in a domestic full-scope simulator and in HAMMLAB were used as data sources. In total, eighteen tasks were analyzed and twenty-three crews participated.
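
    The updated TRC model itself is not given in the abstract; as a generic placeholder for the third step, a time-reliability correlation can be written as the probability that the crew's diagnosis time exceeds the available time, here with an assumed lognormal diagnosis-time distribution, to be scaled afterwards by the PSF weightings from the second step.

        import math

        def nominal_diagnosis_error(t_available, median_time, sigma=0.8):
            """P(diagnosis time > t_available) under an assumed lognormal diagnosis-time model."""
            z = (math.log(t_available) - math.log(median_time)) / sigma
            return 0.5 * math.erfc(z / math.sqrt(2))     # survival function of the standard normal

        # illustrative numbers (minutes), not data from the study:
        print(nominal_diagnosis_error(t_available=20.0, median_time=10.0))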

  16. Community Pharmacists' Perception of the Relevance of Drug ...

    African Journals Online (AJOL)

    HP

    Community Pharmacists' Perception of the Relevance of. Drug Package Insert as Source of Drug Information in. Southwestern Nigeria. Kenechuckwu Diobi, Titilayo O Fakeye* and Rasaq Adisa. Department of Clinical Pharmacy & Pharmacy Administration, Faculty of Pharmacy, University of Ibadan, Ibadan, Nigeria.

  17. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  18. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  19. Nuclear power plant personnel errors in decision-making as an object of probabilistic risk assessment

    International Nuclear Information System (INIS)

    Reer, B.

    1993-09-01

    The integration of human error - also called man-machine system analysis (MMSA) - is an essential part of probabilistic risk assessment (PRA). A new method is presented which allows for a systematic and comprehensive PRA inclusion of decision-based errors due to conflicts or similarities. For the error identification procedure, new questioning techniques are developed. These errors are shown to be identifiable by looking at retroactions caused by subordinate goals as components of the overall safety-relevant goal. New quantification methods for estimating situation-specific probabilities are developed. The factors conflict and similarity are operationalized in a way that allows their quantification based on information which is usually available in PRA. The quantification procedure uses extrapolations and interpolations based on a poor set of data related to decision-based errors. Moreover, for passive errors in decision-making a completely new approach is presented in which errors are quantified via a delay initiating the required action rather than via error probabilities. The practicability of this dynamic approach is demonstrated by a probabilistic analysis of the actions required during the total loss of feedwater event at the Davis-Besse plant in 1985. The extensions of the ''classical'' PRA method developed in this work are applied to an MMSA of the decay heat removal (DHR) of the ''HTR-500''. Errors in decision-making - as potential roots of extraneous acts - are taken into account in a comprehensive and systematic manner. Five additional errors are identified. However, the probabilistic quantification results in a nonsignificant increase of the DHR failure probability. (orig.) [de

  20. Calculation of magnetic error fields in hybrid insertion devices

    International Nuclear Information System (INIS)

    Savoy, R.; Halbach, K.; Hassenzahl, W.; Hoyer, E.; Humphries, D.; Kincaid, B.

    1989-08-01

    The Advanced Light Source (ALS) at the Lawrence Berkeley Laboratory requires insertion devices with fields sufficiently accurate to take advantage of the small emittance of the ALS electron beam. To maintain the spectral performance of the synchrotron radiation and to limit steering effects on the electron beam, these errors must be smaller than 0.25%. This paper develops a procedure for calculating the steering error due to misalignment of the easy axis of the permanent magnet material. The procedure is based on a three-dimensional theory of the design of hybrid insertion devices developed by one of us. The acceptable tolerance for easy axis misalignment is found for a 5 cm period undulator proposed for the ALS. 11 refs., 5 figs

  1. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    Video data require a very large memory capacity. Finding the optimal ratio of quality to data volume in video encoding is one of the most pressing problems due to the urgent need to transfer large amounts of video over various networks. The technology of digital TV signal compression reduces the amount of data used to represent the video stream. Video compression effectively reduces the stream required for transmission and storage. It is important to take into account the uncertainties caused by compression of the video signal when television measuring systems are used. There are many digital compression methods. The aim of the proposed work is to study the influence of video compression on the measurement error in television systems. The measurement error of the object parameter is the main characteristic of television measuring systems. Accuracy characterizes the difference between the measured value and the actual parameter value. Errors caused by the optical system are one source of error in television system measurements; the method used to process the received video signal is another. The presence of errors leads to large distortions in the case of compression with a constant data stream rate, and increases the amount of data required to transmit or record an image frame in the case of constant quality. The purpose of intra-coding is to reduce the spatial redundancy within a frame (or field) of the television image. This redundancy is caused by the strong correlation between the elements of the image. It is possible to convert an array of image samples into a matrix of coefficients that are not correlated with each other, if a corresponding orthogonal transformation can be found. Entropy coding can then be applied to these uncorrelated coefficients to achieve a reduction in the digital stream. One can select a transformation such that most of the matrix coefficients will be almost zero for typical images. Excluding these zero coefficients also
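
    A quick illustration of the decorrelating orthogonal transform mentioned above, using the 2-D DCT on a synthetic smooth 8x8 block (the block and threshold are arbitrary): most coefficients end up near zero, which is what makes entropy coding of the transformed block compact.

        import numpy as np
        from scipy.fft import dctn

        x, y = np.meshgrid(np.arange(8), np.arange(8))
        block = 128 + 20 * np.cos(0.3 * x) + 10 * y       # smooth, strongly correlated samples
        coeffs = dctn(block, norm='ortho')                # orthogonal 2-D DCT of the block

        significant = np.abs(coeffs) > 1.0
        print(int(significant.sum()), "of 64 coefficients are significant")   # a small fraction
        # zeroing the remaining coefficients and inverse-transforming reproduces
        # the block closely, so the sparse representation costs far fewer bits.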

  2. Optical pattern recognition architecture implementing the mean-square error correlation algorithm

    Science.gov (United States)

    Molley, Perry A.

    1991-01-01

    An optical architecture implementing the mean-square error correlation algorithm, MSE = Σ[I − R]², for discriminating the presence of a reference image R in an input image scene I by computing the mean-square error between a time-varying reference image signal s₁(t) and a time-varying input image signal s₂(t), includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal I₁(t) having the form I₁(t) = A₁[1 + √2 m₁ s₁(t) cos(2π f₀ t)], and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector. The resultant intensity of the +1 diffracted order from the acousto-optic device is given by I₂(t) = A₂[2m₂² s₂²(t) − 2√2 m₂ s₂(t) cos(2π f₀ t)]. The time integration of the two signals I₁(t) and I₂(t) on the CCD detector plane produces the result R(τ) of the mean-square error having the form R(τ) = A₁A₂{[T] + [2m₂² · ∫ s₂²(t−τ) dt] − [2m₁m₂ cos(2π f₀ τ) · ∫ s₁(t) s₂(t−τ) dt]}, where: s₁(t) is the signal input to the diode modulation source; s₂(t) is the signal input to the AOD modulation source; A₁ is the light intensity; A₂ is the diffraction efficiency; m₁ and m₂ are constants that determine the signal-to-bias ratio; f₀ is the frequency offset between the oscillator at f_c and the modulation at f_c + f₀; and a₀ and a₁ are constants chosen to bias the diode source and the acousto-optic deflector into their respective linear operating regions so that the diode source exhibits a linear intensity characteristic and the AOD exhibits a linear amplitude characteristic.
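
    A purely digital sketch of what the optical architecture above computes (my own illustration, not the patented implementation): the mean-square error between the reference and a shifted input expands into an energy term plus a cross-correlation term, and its minimum marks the position of the reference within the input.

        import numpy as np

        rng = np.random.default_rng(0)
        s1 = rng.standard_normal(256)                              # reference image signal
        s2 = np.roll(s1, 37) + 0.1 * rng.standard_normal(256)      # shifted, noisy input signal

        def mse_curve(ref, inp):
            n = len(ref)
            return np.array([np.sum((ref - np.roll(inp, -tau))**2) for tau in range(n)])

        mse = mse_curve(s1, s2)
        print(int(np.argmin(mse)))    # 37: the minimum of the MSE locates the reference in the input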

  3. Sourcing of an alternative pericyte-like cell type from peripheral blood in clinically relevant numbers for therapeutic angiogenic applications.

    Science.gov (United States)

    Blocki, Anna; Wang, Yingting; Koch, Maria; Goralczyk, Anna; Beyer, Sebastian; Agarwal, Nikita; Lee, Michelle; Moonshi, Shehzahdi; Dewavrin, Jean-Yves; Peh, Priscilla; Schwarz, Herbert; Bhakoo, Kishore; Raghunath, Michael

    2015-03-01

    Autologous cells hold great potential for personalized cell therapy, reducing immunological complications and the risk of infections. However, low cell counts at harvest, with subsequently long expansion times and associated loss of cell function, currently impede the advancement of autologous cell therapy approaches. Here, we aimed to source clinically relevant numbers of proangiogenic cells from an easily accessible cell source, namely peripheral blood. Using macromolecular crowding (MMC) as a biotechnological platform, we derived a novel cell type from peripheral blood that is generated within 5 days in large numbers (10-40 million cells per 100 ml of blood). This blood-derived angiogenic cell (BDAC) type is of monocytic origin, but exhibits pericyte markers PDGFR-β and NG2 and demonstrates strong angiogenic activity, hitherto ascribed only to MSC-like pericytes. Our findings suggest that BDACs represent an alternative pericyte-like cell population of hematopoietic origin that is involved in promoting early stages of microvasculature formation. As a proof of principle of BDAC efficacy in an ischemic disease model, BDAC injection rescued affected tissues in a murine hind limb ischemia model by accelerating and enhancing revascularization. Derived from a renewable tissue that is easy to collect, BDACs overcome current shortcomings of autologous cell therapy, in particular for tissue repair strategies.

  4. Analysis of positron annihilation lifetime data by numerical Laplace inversion: Corrections for source terms and zero-time shift errors

    International Nuclear Information System (INIS)

    Gregory, R.B.

    1991-01-01

    We have recently described modifications to the program CONTIN for the solution of Fredholm integral equations with convoluted kernels of the type that occur in the analysis of positron annihilation lifetime data. In this article, modifications to the program to correct for source terms in the sample and reference decay curves and for shifts in the position of the zero-time channel of the sample and reference data are described. Unwanted source components, expressed as a discrete sum of exponentials, may be removed from both the sample and reference data by modification of the sample data alone, without the need for direct knowledge of the instrument resolution function. Shifts in the position of the zero-time channel of up to half the channel width of the multichannel analyzer can be corrected. Analyses of computer-simulated test data indicate that the quality of the reconstructed annihilation rate probability density functions is improved by employing a reference material with a short lifetime and indicate that reference materials which generate free positrons by quenching positronium formation (i.e. strong oxidizing agents) have lifetimes that are too long (400-450 ps) to provide reliable estimates of the lifetime parameters for the short-lived components with the methods described here. Well-annealed single crystals of metals with lifetimes less than 200 ps, such as molybdenum (123 ps) and aluminium (166 ps), do not introduce significant errors in estimates of the lifetime parameters and are to be preferred as reference materials. The performance of our modified version of CONTIN is illustrated by application to positron annihilation in polytetrafluoroethylene. (orig.)

  5. Why relevance theory is relevant for lexicography

    DEFF Research Database (Denmark)

    Bothma, Theo; Tarp, Sven

    2014-01-01

    This article starts by providing a brief summary of relevance theory in information science in relation to the function theory of lexicography, explaining the different types of relevance, viz. objective system relevance and the subjective types of relevance, i.e. topical, cognitive, situational...... that is very important for lexicography as well as for information science, viz. functional relevance. Since all lexicographic work is ultimately aimed at satisfying users’ information needs, the article then discusses why the lexicographer should take note of all these types of relevance when planning a new...... dictionary project, identifying new tasks and responsibilities of the modern lexicographer. The article furthermore discusses how relevance theory impacts on teaching dictionary culture and reference skills. By integrating insights from lexicography and information science, the article contributes to new...

  6. How Source Information Shapes Lay Interpretations of Science Conflicts: Interplay between Sourcing, Conflict Explanation, Source Evaluation, and Claim Evaluation

    Science.gov (United States)

    Thomm, Eva; Bromme, Rainer

    2016-01-01

    When laypeople read controversial scientific information in order to make a personally relevant decision, information on the source is a valuable resource with which to evaluate multiple, competing claims. Due to their bounded understanding, laypeople rely on the expertise of others and need to identify whether sources are credible. The present…

  7. Simulation-aided investigation of beam hardening induced errors in CT dimensional metrology

    DEFF Research Database (Denmark)

    Tan, Ye; Kiekens, Kim; Welkenhuyzen, Frank

    2013-01-01

    most of these factors are mutually correlated, it remains challenging to interpret measurement results and to identify the distinct error sources. Since simulations allow isolating the different affecting factors, they form a useful complement to experimental investigations. Dewulf et al. [5

  8. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, an excessive authority gradient, and excessive professional courtesy will cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  9. The role of errors in the measurements performed at the reprocessing plant head-end for material accountancy purposes

    International Nuclear Information System (INIS)

    Foggi, C.; Liebetrau, A.M.; Petraglia, E.

    1999-01-01

    One of the most common procedures used in determining the amount of nuclear material contained in solutions consists of first measuring the volume and the density of the solution, and then determining the concentrations of this material. This presentation will focus on errors generated at the process line in the measurement of volume and density. These errors and their associated uncertainties can be grouped into distinct categories depending on their origin: those attributable to measuring instruments; those attributable to operational procedures; variability in measurement conditions; and errors in the analysis and interpretation of results. Possible error sources, their relative magnitudes, and an error propagation rationale are discussed, with emphasis placed on biases and on errors of the last three types, called systematic errors [ru

  10. Subdivision Error Analysis and Compensation for Photoelectric Angle Encoder in a Telescope Control System

    Directory of Open Access Journals (Sweden)

    Yanrui Su

    2015-01-01

    Full Text Available As the position sensor, the photoelectric angle encoder affects the accuracy and stability of the telescope control system (TCS). A TCS-based subdivision error compensation method for the encoder is proposed. Six types of subdivision error sources are first extracted from mathematical expressions of the subdivision signals. Then the period-length relationships between subdivision signals and subdivision errors are deduced. An error compensation algorithm utilizing only the shaft position of the TCS is put forward, along with two control models: in Model I the algorithm is applied only to the speed loop of the TCS, while in Model II it is applied to both the speed loop and the position loop. In the context of an actual project, the elevation jitter phenomenon of the telescope is discussed to decide the necessity of DC-type subdivision error compensation. Low-speed elevation performance before and after error compensation is compared, which helps decide that Model II is preferred. Compared with the original performance, the maximum position error of the elevation with DC subdivision error compensation is reduced by approximately 47.9%, from 1.42″ to 0.74″. Elevation jitter decreases substantially. This method can compensate the encoder subdivision errors effectively and improve the stability of the TCS.
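
    The paper's six-source error model and its integration into the TCS control loops are not reproduced here; the sketch below only illustrates the generic idea of compensating subdivision errors as a periodic function of shaft position (a low-order harmonic fit within the encoder signal period), with an assumed period and synthetic calibration data.

        import numpy as np

        period = 20.0 / 3600.0                                    # assumed encoder signal period, degrees

        def design_matrix(pos, n_harm=2):
            phase = 2 * np.pi * (pos % period) / period           # position within one signal period
            cols = [np.ones_like(phase)]
            for k in range(1, n_harm + 1):
                cols += [np.sin(k * phase), np.cos(k * phase)]
            return np.column_stack(cols)

        def fit_compensation(pos_meas, pos_ref, n_harm=2):
            """Least-squares fit of the periodic subdivision error from calibration data."""
            coeffs, *_ = np.linalg.lstsq(design_matrix(pos_meas, n_harm), pos_meas - pos_ref, rcond=None)
            return coeffs

        def compensate(pos_meas, coeffs, n_harm=2):
            """Correct a measured shaft position using only the position itself."""
            return pos_meas - design_matrix(pos_meas, n_harm) @ coeffs

        # synthetic check with one- and two-cycle error harmonics inside the period
        pos_ref = np.linspace(0.0, 0.1, 5000)
        err = 2e-4 * np.sin(2 * np.pi * pos_ref / period) + 1e-4 * np.cos(4 * np.pi * pos_ref / period)
        c = fit_compensation(pos_ref + err, pos_ref)
        print(np.max(np.abs(compensate(pos_ref + err, c) - pos_ref)))   # residual error is much smaller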

  11. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    Science.gov (United States)

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.

  12. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  13. Dissociation of item and source memory in rhesus monkeys.

    Science.gov (United States)

    Basile, Benjamin M; Hampton, Robert R

    2017-09-01

    Source memory, or memory for the context in which a memory was formed, is a defining characteristic of human episodic memory and source memory errors are a debilitating symptom of memory dysfunction. Evidence for source memory in nonhuman primates is sparse despite considerable evidence for other types of sophisticated memory and the practical need for good models of episodic memory in nonhuman primates. A previous study showed that rhesus monkeys confused the identity of a monkey they saw with a monkey they heard, but only after an extended memory delay. This suggests that they initially remembered the source - visual or auditory - of the information but forgot the source as time passed. Here, we present a monkey model of source memory that is based on this previous study. In each trial, monkeys studied two images, one that they simply viewed and touched and the other that they classified as a bird, fish, flower, or person. In a subsequent memory test, they were required to select the image from one source but avoid the other. With training, monkeys learned to suppress responding to images from the to-be-avoided source. After longer memory intervals, monkeys continued to show reliable item memory, discriminating studied images from distractors, but made many source memory errors. Monkeys discriminated source based on study method, not study order, providing preliminary evidence that our manipulation of retention interval caused errors due to source forgetting instead of source confusion. Finally, some monkeys learned to select remembered images from either source on cue, showing that they did indeed remember both items and both sources. This paradigm potentially provides a new model to study a critical aspect of episodic memory in nonhuman primates. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals' psychological well-being.

    Science.gov (United States)

    Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry

    2010-12-01

    Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error [2, 3]. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted with the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.

  15. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Science.gov (United States)

    Esselborn, Saskia; Rudenko, Sergei; Schöne, Tilo

    2018-03-01

    Interannual to decadal sea level trends are indicators of climate variability and change. A major source of global and regional sea level data is satellite radar altimetry, which relies on precise knowledge of the satellite's orbit. Here, we assess the error budget of the radial orbit component for the TOPEX/Poseidon mission for the period 1993 to 2004 from a set of different orbit solutions. The errors for seasonal, interannual (5-year), and decadal periods are estimated on global and regional scales based on radial orbit differences from three state-of-the-art orbit solutions provided by different research teams: the German Research Centre for Geosciences (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), and the Goddard Space Flight Center (GSFC). The global mean sea level error related to orbit uncertainties is of the order of 1 mm (8 % of the global mean sea level variability) with negligible contributions on the annual and decadal timescales. In contrast, the orbit-related error of the interannual trend is 0.1 mm yr-1 (27 % of the corresponding sea level variability) and might hamper the estimation of an acceleration of the global mean sea level rise. For regional scales, the gridded orbit-related error is up to 11 mm, and for about half the ocean the orbit error accounts for at least 10 % of the observed sea level variability. The seasonal orbit error amounts to 10 % of the observed seasonal sea level signal in the Southern Ocean. At interannual and decadal timescales, the orbit-related trend uncertainties reach regionally more than 1 mm yr-1. The interannual trend errors account for 10 % of the observed sea level signal in the tropical Atlantic and the south-eastern Pacific. For decadal scales, the orbit-related trend errors are prominent in several regions, including the South Atlantic, western North Atlantic, central Pacific, South Australian Basin, and the Mediterranean Sea. Based on a set of test orbits calculated at GFZ, the sources of the
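    As a rough illustration of the approach (not the study's code), the sketch below estimates an orbit-related trend error from the difference between sea level anomalies computed with two orbit solutions; all series and numbers are synthetic assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(12 * 12) / 12.0                      # monthly samples, 12 years

    # Sea level anomaly (mm) at one grid point from two orbit solutions.
    sla_a = 3.0 * t + 5.0 * np.sin(2 * np.pi * t) + rng.normal(0, 2, t.size)
    sla_b = sla_a + 0.1 * t + rng.normal(0, 1, t.size)

    # Differencing the two solutions isolates the orbit-related signal.
    orbit_diff = sla_b - sla_a

    # Linear trend of the difference = orbit-related trend uncertainty (mm/yr).
    trend_err = np.polyfit(t, orbit_diff, 1)[0]
    observed_trend = np.polyfit(t, sla_a, 1)[0]
    print(f"orbit-related trend error: {trend_err:.2f} mm/yr "
          f"({100 * abs(trend_err / observed_trend):.0f} % of the observed trend)")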

  16. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Directory of Open Access Journals (Sweden)

    S. Esselborn

    2018-03-01

    Full Text Available Interannual to decadal sea level trends are indicators of climate variability and change. A major source of global and regional sea level data is satellite radar altimetry, which relies on precise knowledge of the satellite's orbit. Here, we assess the error budget of the radial orbit component for the TOPEX/Poseidon mission for the period 1993 to 2004 from a set of different orbit solutions. The errors for seasonal, interannual (5-year), and decadal periods are estimated on global and regional scales based on radial orbit differences from three state-of-the-art orbit solutions provided by different research teams: the German Research Centre for Geosciences (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), and the Goddard Space Flight Center (GSFC). The global mean sea level error related to orbit uncertainties is of the order of 1 mm (8 % of the global mean sea level variability) with negligible contributions on the annual and decadal timescales. In contrast, the orbit-related error of the interannual trend is 0.1 mm yr−1 (27 % of the corresponding sea level variability) and might hamper the estimation of an acceleration of the global mean sea level rise. For regional scales, the gridded orbit-related error is up to 11 mm, and for about half the ocean the orbit error accounts for at least 10 % of the observed sea level variability. The seasonal orbit error amounts to 10 % of the observed seasonal sea level signal in the Southern Ocean. At interannual and decadal timescales, the orbit-related trend uncertainties reach regionally more than 1 mm yr−1. The interannual trend errors account for 10 % of the observed sea level signal in the tropical Atlantic and the south-eastern Pacific. For decadal scales, the orbit-related trend errors are prominent in several regions, including the South Atlantic, western North Atlantic, central Pacific, South Australian Basin, and the Mediterranean Sea. Based on a set of test

  17. Bayesian networks modeling for thermal error of numerical control machine tools

    Institute of Scientific and Technical Information of China (English)

    Xin-hua YAO; Jian-zhong FU; Zi-chen CHEN

    2008-01-01

    The interaction between the heat source location, its intensity, the thermal expansion coefficient, the machine system configuration and the running environment creates complex thermal behavior of a machine tool, and also makes thermal error prediction difficult. To address this issue, a novel prediction method for machine tool thermal error based on Bayesian networks (BNs) was presented. The method described causal relationships of factors inducing thermal deformation by graph theory and estimated the thermal error by Bayesian statistical techniques. Due to the effective combination of domain knowledge and sampled data, the BN method could adapt to changes in the running state of the machine and obtain satisfactory prediction accuracy. Experiments on spindle thermal deformation were conducted to evaluate the modeling performance. Experimental results indicate that the BN method performs far better than the least squares (LS) analysis in terms of modeling estimation accuracy.
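    To make the idea concrete, the following self-contained Python sketch performs inference in a toy discrete Bayesian network in which a thermal-error node depends on heat-source intensity and ambient temperature; the structure and all probabilities are invented for illustration and are not the paper's model.

    from itertools import product

    # Priors for the parent nodes (illustrative values).
    p_intensity = {"low": 0.6, "high": 0.4}
    p_ambient = {"cool": 0.5, "warm": 0.5}

    # Conditional probability table: P(thermal error | intensity, ambient).
    cpt_error = {
        ("low", "cool"): {"small": 0.90, "large": 0.10},
        ("low", "warm"): {"small": 0.70, "large": 0.30},
        ("high", "cool"): {"small": 0.50, "large": 0.50},
        ("high", "warm"): {"small": 0.20, "large": 0.80},
    }

    def posterior_error(evidence):
        """P(thermal error | evidence) by enumeration over the parent states."""
        scores = {"small": 0.0, "large": 0.0}
        for i, a in product(p_intensity, p_ambient):
            if evidence.get("intensity", i) != i or evidence.get("ambient", a) != a:
                continue                      # inconsistent with the evidence
            weight = p_intensity[i] * p_ambient[a]
            for e, p in cpt_error[(i, a)].items():
                scores[e] += weight * p
        total = sum(scores.values())
        return {e: s / total for e, s in scores.items()}

    # Example: a high-intensity heat source is observed, ambient state unknown.
    print(posterior_error({"intensity": "high"}))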

  18. A modified backpropagation algorithm for training neural networks on data with error bars

    International Nuclear Information System (INIS)

    Gernoth, K.A.; Clark, J.W.

    1994-08-01

    A method is proposed for training multilayer feedforward neural networks on data contaminated with noise. Specifically, we consider the case that the artificial neural system is required to learn a physical mapping when the available values of the target variable are subject to experimental uncertainties, but are characterized by error bars. The proposed method, based on the maximum-likelihood criterion for parameter estimation, involves simple modifications of the on-line backpropagation learning algorithm. These include incorporation of the error-bar assignments in a pattern-specific learning rate, together with epochal updating of a new measure of model accuracy that replaces the usual mean-square error. The extended backpropagation algorithm is successfully tested on two problems relevant to the modelling of atomic-mass systematics by neural networks. Provided the underlying mapping is reasonably smooth, neural nets trained with the new procedure are able to learn the true function to a good approximation even in the presence of high levels of Gaussian noise. (author). 26 refs, 2 figs, 5 tabs
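    The core idea can be sketched in a few lines of NumPy: under Gaussian noise, maximising the likelihood is equivalent to minimising the squared error weighted by the inverse variance implied by each pattern's error bar. The example below is an assumed illustration of that weighting (full-batch gradient descent on a small tanh network), not a reproduction of the authors' on-line algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data: y = sin(x) with a per-pattern error bar sigma.
    x = np.linspace(-3, 3, 200)[:, None]
    sigma = rng.uniform(0.05, 0.5, size=(200, 1))
    y = np.sin(x) + rng.normal(0.0, sigma)

    # One hidden layer, trained by full-batch gradient descent.
    W1 = rng.normal(0, 0.5, (1, 20)); b1 = np.zeros(20)
    W2 = rng.normal(0, 0.5, (20, 1)); b2 = np.zeros(1)
    lr = 0.3

    for epoch in range(3000):
        h = np.tanh(x @ W1 + b1)
        pred = h @ W2 + b2
        # Inverse-variance weights: patterns with tight error bars count more.
        w = 1.0 / sigma**2
        w /= w.sum()
        grad_out = w * (pred - y)             # gradient of the weighted SSE
        grad_W2 = h.T @ grad_out
        grad_b2 = grad_out.sum(0)
        grad_h = (grad_out @ W2.T) * (1 - h**2)
        grad_W1 = x.T @ grad_h
        grad_b1 = grad_h.sum(0)
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1

    pred = np.tanh(x @ W1 + b1) @ W2 + b2
    chi2 = float((((pred - y) / sigma) ** 2).mean())
    print(f"mean squared error in units of the error bars: {chi2:.2f}")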

  19. Design of an error-free nondestructive plutonium assay facility

    International Nuclear Information System (INIS)

    Moore, C.B.; Steward, W.E.

    1987-01-01

    An automated, at-line nondestructive assay (NDA) laboratory is installed in facilities recently constructed at the Savannah River Plant. The laboratory will enhance nuclear materials accounting in new plutonium scrap and waste recovery facilities. The advantages of at-line NDA operations will not be realized if results are clouded by errors in analytical procedures, sample identification, record keeping, or techniques for extracting samples from process streams. Minimization of such errors has been a primary design objective for the new facility. Concepts for achieving that objective include mechanizing the administrative tasks of scheduling activities in the laboratory, identifying samples, recording and storing assay data, and transmitting results information to process control and materials accounting functions. These concepts have been implemented in an analytical computer system that is programmed to avoid the obvious sources of error encountered in laboratory operations. The laboratory computer exchanges information with process control and materials accounting computers, transmitting results information and obtaining process data and accounting information as required to guide process operations and maintain current records of materials flow through the new facility

  20. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    … speaker's competency (note the -y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system … with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical … as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors.