WorldWideScience

Sample records for relevant error sources

  1. Human error theory: relevance to nurse management.

    Science.gov (United States)

    Armitage, Gerry

    2009-03-01

    To describe, discuss and critically appraise human error theory and to consider its relevance for nurse managers. Healthcare errors are a persistent threat to patient safety. Effective risk management and clinical governance depend on understanding the nature of error. This paper draws upon a wide literature from published works, largely from the field of cognitive psychology and human factors. Although the content of this paper is pertinent to any healthcare professional, it is written primarily for nurse managers. Error is inevitable. Causation is often attributed to individuals, yet causation in complex environments such as healthcare is predominantly multi-factorial. Individual performance is affected by the tendency to develop prepacked solutions and attention deficits, which can in turn be related to local conditions and systems or latent failures. Blame is often inappropriate. Defences should be constructed in the light of these considerations and to promote error wisdom and organizational resilience. Managing and learning from error is seen as a priority in the British National Health Service (NHS); this can be better achieved with an understanding of the roots, nature and consequences of error. Such an understanding can provide a helpful framework for a range of risk management activities.

  2. Structural Model Error and Decision Relevancy

    Science.gov (United States)

    Goldsby, M.; Lusk, G.

    2017-12-01

    The extent to which climate models can underwrite specific climate policies has long been a contentious issue. Skeptics frequently deny that climate models are trustworthy in an attempt to undermine climate action, whereas policy makers often desire information that exceeds the capabilities of extant models. While not skeptics, a group of mathematicians and philosophers [Frigg et al. (2014)] recently argued that even tiny differences between the structure of a complex dynamical model and its target system can lead to dramatic predictive errors, possibly resulting in disastrous consequences when policy decisions are based upon those predictions. They call this result the Hawkmoth effect (HME), and seemingly use it to rebuke right-wing proposals to forgo mitigation in favor of adaptation. However, a vigorous debate has emerged between Frigg et al. on one side and another philosopher-mathematician pair [Winsberg and Goodwin (2016)] on the other. On one hand, Frigg et al. argue that their result shifts the burden to climate scientists to demonstrate that their models do not fall prey to the HME. On the other hand, Winsberg and Goodwin suggest that arguments like those asserted by Frigg et al. can be, if taken seriously, "dangerous": they fail to consider the variety of purposes for which models can be used, and thus too hastily undermine large swaths of climate science. They put the burden back on Frigg et al. to show their result has any effect on climate science. This paper seeks to attenuate this debate by establishing an irenic middle position; we find that there is more agreement between sides than it first seems. We distinguish a 'decision standard' from a 'burden of proof', which helps clarify the contributions to the debate from both sides. In making this distinction, we argue that scientists bear the burden of assessing the consequences of HME, but that the standard Frigg et al. adopt for decision relevancy is too strict.

  3. Sources of medical error in refractive surgery.

    Science.gov (United States)

    Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B

    2013-05-01

    To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.

  4. Sources of Error in Satellite Navigation Positioning

    Directory of Open Access Journals (Sweden)

    Jacek Januszewski

    2017-09-01

    Uninterrupted information about the user's position can generally be obtained from a satellite navigation system (SNS). At the time of writing (January 2017), two global SNSs, GPS and GLONASS, are fully operational, and two further global systems, Galileo and BeiDou, are under construction. In each SNS the accuracy of the user's position is affected by three main factors: the accuracy of each satellite's position, the accuracy of the pseudorange measurement, and the satellite geometry. The user's position error is a function of both the pseudorange error, called UERE (User Equivalent Range Error), and the user/satellite geometry, expressed by the appropriate Dilution Of Precision (DOP) coefficient. This error is decomposed into two types of errors: the signal-in-space ranging error, called URE (User Range Error), and the user equipment error (UEE). Detailed analyses of the URE, UEE, UERE and DOP coefficients, and of the changes of the DOP coefficients on different days, are presented in this paper.
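
    The error budget described above combines in a simple way: the UERE is (approximately) the root-sum-square of the URE and the UEE, and the expected position error scales with the DOP coefficient. A minimal numerical sketch, with made-up values that are not taken from the paper:

      # Illustrative only: combine the error terms named in the abstract above.
      # UERE = sqrt(URE^2 + UEE^2); position error ~ DOP * UERE (1-sigma).
      import math

      ure_m = 0.8   # hypothetical signal-in-space ranging error (URE), metres
      uee_m = 1.5   # hypothetical user equipment error (UEE), metres
      pdop = 2.3    # hypothetical position dilution of precision (PDOP)

      uere_m = math.sqrt(ure_m**2 + uee_m**2)   # user equivalent range error
      position_error_m = pdop * uere_m          # rough 1-sigma position error

      print(f"UERE = {uere_m:.2f} m, estimated position error = {position_error_m:.2f} m")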

  5. Probable sources of errors in radiation therapy (abstract)

    International Nuclear Information System (INIS)

    Khan, U.H.

    1998-01-01

    It is a fact that some errors always occur in dose-volume prescription, management of the radiation beam, derivation of exposure, planning of the treatment and, finally, treatment of the patient (a three-dimensional subject). This paper highlights all the sources of error and the relevant methods to decrease or eliminate them, thus improving the overall therapeutic efficiency and accuracy. It is a comprehensive teamwork of the radiotherapist, medical radiation physicist, medical technologist and the patient. All the links in the whole chain of radiotherapy are equally important and are duly considered in the paper. The decision for palliative or radical treatment is based on the nature and extent of the disease, site, stage, grade, length of the history of the condition and biopsy reports, etc. This may entail certain uncertainties in the volume of the tumor, quality and quantity of radiation and dose fractionation, etc., which may be under- or over-estimated. An effort has been made to guide the radiotherapist in avoiding the pitfalls in the arena of radiotherapy. (author)

  6. A posteriori error estimates in voice source recovery

    Science.gov (United States)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model relating these quantities is used for the solution. A variational method of solving the inverse problem of voice source recovery for a new parametric class of sources, namely piecewise-linear sources (PWL-sources), is proposed. A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds of the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that the a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  7. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  8. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  9. Clinical relevance of and risk factors associated with medication administration time errors

    NARCIS (Netherlands)

    Teunissen, R.; Bos, J.; Pot, H.; Pluim, M.; Kramers, C.

    2013-01-01

    PURPOSE: The clinical relevance of and risk factors associated with errors related to medication administration time were studied. METHODS: In this explorative study, 66 medication administration rounds were studied on two wards (surgery and neurology) of a hospital. Data on medication errors were

  10. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  11. The VTTVIS line imaging spectrometer - principles, error sources, and calibration

    DEFF Research Database (Denmark)

    Jørgensen, R.N.

    2002-01-01

    Hyperspectral imaging with a spatial resolution of a few mm2 has proved to have great potential within crop and weed classification and also within nutrient diagnostics. A commonly used hyperspectral imaging system is based on the Prism-Grating-Prism (PGP) principle produced by Specim Ltd... However, there is little work describing the basic principles, potential error sources, and/or adjustment and calibration procedures; this report fulfils the need for such documentation, with special focus on the system at KVL. The PGP-based system has several severe error sources, which should be removed prior to any analysis... Variations in off-axis transmission efficiencies, diffraction efficiencies, and image distortion have a significant impact on the instrument performance. Procedures removing or minimising these systematic error sources are developed and described for the system built at KVL but can be generalised to other PGP...

  12. The error sources appearing for the gamma radioactive source measurement in dynamic condition

    International Nuclear Information System (INIS)

    Sirbu, M.

    1977-01-01

    An error analysis for the measurement of gamma radioactive sources placed on the soil with the help of a helicopter is presented. The analysis is based on a new formula that takes into account the gamma-ray attenuation factor of the helicopter walls. A complete error formula and an error diagram are given. (author)

  13. Source position error influence on industry CT image quality

    International Nuclear Information System (INIS)

    Cong Peng; Li Zhipeng; Wu Haifeng

    2004-01-01

    Based on a simulation exercise, the influence of source position error on industry CT (ICT) image quality was studied and valuable parameters were obtained for the design of ICT. A clear CT image of a container was also acquired from the CT testing system. (authors)

  14. Water displacement leg volumetry in clinical studies - A discussion of error sources

    Science.gov (United States)

    2010-01-01

    Background: Water displacement leg volumetry is a highly reproducible method, allowing the confirmation of efficacy of vasoactive substances. Nevertheless, errors in its execution and the selection of unsuitable patients are likely to negatively affect the outcome of clinical studies in chronic venous insufficiency (CVI). Discussion: Placebo-controlled double-blind drug studies in CVI were searched (Cochrane Review 2005, MedLine search until December 2007) and assessed with regard to efficacy (volume reduction of the leg), patient characteristics, and potential methodological error sources. Almost every second study reported only small drug effects (≤ 30 mL volume reduction). The conduct of volumetry was identified as the most relevant error source. Because the practical use of available equipment varies, volume differences of more than 300 mL - which is a multifold of a potential treatment effect - have been reported between consecutive measurements. Other potential error sources were insufficient patient guidance or difficulties with the transition from the Widmer CVI classification to the CEAP (Clinical Etiological Anatomical Pathophysiological) grading. Summary: Patients should be properly diagnosed with CVI and selected for stable oedema and further clinical symptoms relevant to the specific study. Centres require thorough training on the use of the volumeter and on patient guidance. Volumetry should be performed under constant conditions. The reproducibility of short-term repeat measurements has to be ensured. PMID:20070899

  15. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.

  16. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    International Nuclear Information System (INIS)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A.

    2011-01-01

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2/2, 1/1) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa. Conclusions: There is a lack of correlation between
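
    For orientation, the statistical comparison described in this record (per-beam gamma passing rates versus changes in anatomy-based DVH metrics, summarised by Pearson's r) can be sketched in a few lines. The values below are invented and the snippet is not the authors' analysis code:

      # Illustrative only: correlate per-plan gamma passing rates with the
      # error in a clinically relevant DVH metric; a weak or positive Pearson r,
      # as reported above, would indicate poor predictive power of the QA metric.
      import numpy as np
      from scipy import stats

      passing_rate = np.array([99.2, 97.5, 95.1, 92.8, 90.3, 88.7])   # hypothetical %, 3%/3 mm
      dvh_metric_error = np.array([1.5, 4.2, 0.8, 3.9, 2.1, 5.0])     # hypothetical |error| in parotid mean dose, %

      r, p_value = stats.pearsonr(passing_rate, dvh_metric_error)
      print(f"Pearson r = {r:.3f} (p = {p_value:.3f})")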

  17. Safety analysis methodology with assessment of the impact of the prediction errors of relevant parameters

    International Nuclear Information System (INIS)

    Galia, A.V.

    2011-01-01

    The best estimate plus uncertainty approach (BEAU) requires the use of extensive resources and is therefore usually applied to cases in which the available safety margin obtained with a conservative methodology can be questioned. Outside the BEAU methodology, there is no clear approach on how to deal with the issue of considering the uncertainties resulting from prediction errors in the safety analyses performed for licensing submissions. However, the regulatory document RD-310 mentions that the analysis method shall account for uncertainties in the analysis data and models. A possible approach, simple and reasonable and representing just the author's views, is presented to take into account the impact of prediction errors and other uncertainties when performing safety analysis in line with regulatory requirements. The approach proposes taking into account the prediction error of relevant parameters. Relevant parameters would be those plant parameters that are surveyed and are used to initiate the action of a mitigating system, or those that are representative of the most challenging phenomena for the integrity of a fission barrier. Examples of the application of the methodology are presented, involving a comparison between the results obtained with the new approach and a best estimate calculation during the blowdown phase for two small breaks in a generic CANDU 6 station. The calculations are performed with the CATHENA computer code. (author)

  18. Rotational patient setup errors in IGRT with XVI system in Elekta Synergy and their clinical relevance

    International Nuclear Information System (INIS)

    Madhusudhana Sresty, N.V.N.; Muralidhar, K.R.; Raju, A.K.; Sha, R.L.; Ramanjappa

    2008-01-01

    The goal of Image Guided Radiotherapy (IGRT) is to improve the accuracy of treatment delivery. In this technique, it is possible to obtain volumetric images of patient anatomy before delivery of treatment. The XVI (release 3.5) system on the Elekta Synergy linear accelerator (Elekta, Crawley, UK) has the potential to ensure that the relative position of the target volume is the same as in the treatment plan. It involves acquiring planar images produced by a kilovoltage cone beam rotating about the patient in the treatment position. After a 3-dimensional match between reference and localization images, the system reports rotational errors along with translational shifts. Translational shifts can easily be applied with the treatment couch, but rotational shifts cannot. Most studies have dealt with translational shifts only; few have reported on rotational errors. It has been found that, in the treatment of elongated targets, even small rotational errors can make a difference in the results. The main objectives of this study are: 1) to verify the magnitude of rotational errors observed at different clinical sites and to compare them with other reports; 2) to determine their clinical relevance; and 3) to determine the difference in rotational shift results caused by improper selection of the kV collimator.

  19. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  20. Error Analysis of CM Data Products: Sources of Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, Brian D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eckert-Gallup, Aubrey Celia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cochran, Lainy Dromgoole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kraus, Terrence D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Allen, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beal, Bill [National Security Technologies, Joint Base Andrews, MD (United States); Okada, Colin [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Simpson, Mathew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-01

    The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE's Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision-making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.

  1. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    Science.gov (United States)

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  2. Identification errors in the blood transfusion laboratory: a still relevant issue for patient safety.

    Science.gov (United States)

    Lippi, Giuseppe; Plebani, Mario

    2011-04-01

    Remarkable technological advances and increased awareness have both contributed to decrease substantially the uncertainty of the analytical phase, so that the manually intensive preanalytical activities currently represent the leading sources of errors in laboratory and transfusion medicine. Among preanalytical errors, misidentification and mistransfusion are still regarded as a considerable problem, posing serious risks for patient health and carrying huge expenses for the healthcare system. As such, a reliable policy of risk management should be readily implemented, developing through a multifaceted approach to prevent or limit the adverse outcomes related to transfusion reactions from blood incompatibility. This strategy encompasses root cause analysis, compliance with accreditation requirements, strict adherence to standard operating procedures, guidelines and recommendations for specimen collection, use of positive identification devices, rejection of potentially misidentified specimens, informatics data entry, query host communication, automated systems for patient identification and sample labeling and an adequate and safe environment. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Study on analysis from sources of error for Airborne LIDAR

    Science.gov (United States)

    Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.

    2016-11-01

    With the advancement of aerial photogrammetry, obtaining geo-spatial information of high spatial and temporal resolution has become feasible, and Airborne LIDAR measurement provides a new technical means for this, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of Earth observation technology: a laser scanner mounted on an aviation platform emits laser pulses and receives their returns to obtain high-precision, high-density three-dimensional coordinate point cloud data and intensity information. In this paper, we briefly describe airborne laser radar systems, analyse in detail some error sources in Airborne LIDAR data, and put forward corresponding methods to avoid or eliminate them. Taking into account practical engineering applications, some recommendations are developed for these designs, which has crucial theoretical and practical significance in the field of Airborne LIDAR data processing.

  4. Reduction of sources of error and simplification of the Carbon-14 urea breath test

    International Nuclear Information System (INIS)

    Bellon, M.S.

    1997-01-01

    Carbon-14 urea breath testing is established in the diagnosis of H. pylori infection. The aim of this study was to investigate possible further simplification and identification of error sources in the 14C urea kit extensively used at the Royal Adelaide Hospital. Thirty-six patients with validated H. pylori status were tested, with breath samples taken at 10, 15, and 20 min. Using the single sample value at 15 min, there was no change in the diagnostic category. Reduction of errors in analysis depends on attention to the following details: stability of the absorption solution (now > 2 months); compatibility of the scintillation cocktail and absorption solution (with particular regard to photoluminescence and chemiluminescence); reduction in chemical quenching (moisture reduction); understanding of the counting hardware and its relevance; and appropriate response to deviations in quality assurance. With this experience, we are confident of the performance and reliability of the RAPID-14 urea breath test kit now available commercially.

  5. Spelling Errors of Iranian School-Level EFL Learners: Potential Sources

    Directory of Open Access Journals (Sweden)

    Mahnaz Saeidi

    2010-05-01

    With the purpose of examining the sources of spelling errors of Iranian school-level EFL learners, the present researchers analyzed the dictation samples of 51 Iranian senior and junior high school male and female students studying at an Iranian school in Baku, Azerbaijan. The content analysis of the data revealed three main sources (intralingual, interlingual, and unique) with seven patterns of errors. The frequency of intralingual errors far outnumbered that of interlingual errors, and unique errors were even less frequent. Therefore, in-service training programs may include some instruction on raising the teachers' awareness of the different sources of errors to focus on during the teaching program.

  6. Soft error evaluation in SRAM using α sources

    International Nuclear Information System (INIS)

    He Chaohui; Chu Jun; Ren Xueming; Xia Chunmei; Yang Xiupei; Zhang Weiwei; Wang Hongquan; Xiao Jiangbo; Li Xiaolin

    2006-01-01

    Soft errors in memories directly influence the reliability of products. To compare the resistance of three different memories to soft errors, alpha-particle irradiation experiments were performed: the numbers of soft errors were measured for three different SRAMs, and the single event upset (SEU) cross sections and failures in time (FIT) were calculated. According to the SEU cross sections, A166M is the most resistant to soft errors, followed by B166M, with B200M last. The average FIT of B166M is smaller than that of B200M, while that of A166M is the largest among them. (authors)
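
    The two figures of merit used in this record are commonly computed as below; the numbers are invented and the FIT conversion assumes a chosen environmental particle flux, so this is only an illustration of the definitions, not the authors' calculation:

      # Illustrative only: SEU cross section and FIT from an irradiation test.
      # sigma_SEU = upsets / (fluence * bits); FIT = failures per 1e9 device-hours.
      upsets = 120                    # hypothetical observed single event upsets
      fluence_cm2 = 1.0e7             # hypothetical alpha fluence, particles/cm^2
      bits = 4 * 1024 * 1024          # hypothetical 4-Mbit SRAM

      sigma_per_bit = upsets / (fluence_cm2 * bits)            # cm^2 per bit
      flux_cm2_per_hour = 1.0e-3                               # assumed environmental flux
      fit = sigma_per_bit * bits * flux_cm2_per_hour * 1.0e9   # failures in time

      print(f"SEU cross section = {sigma_per_bit:.2e} cm^2/bit, FIT = {fit:.1f}")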

  7. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    International Nuclear Information System (INIS)

    Ma, T; Kumaraswamy, L

    2015-01-01

    Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors of up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted) and (3) gantry angle shift error (3-degree uniform shift). 2D and 3D gamma evaluation were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between the original plan and the error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, where a conventional gamma-based pre-treatment QA might not necessarily detect it.

  8. The Relevance of Second Language Acquisition Theory to the Written Error Correction Debate

    Science.gov (United States)

    Polio, Charlene

    2012-01-01

    The controversies surrounding written error correction can be traced to Truscott (1996) in his polemic against written error correction. He claimed that empirical studies showed that error correction was ineffective and that this was to be expected given "the nature of the correction process" and "the nature of language learning" (p. 328, emphasis…

  9. A survey of camera error sources in machine vision systems

    Science.gov (United States)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  10. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo

    2016-01-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo... radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse...

  11. Detection of anomalies in radio tomography of asteroids: Source count and forward errors

    Science.gov (United States)

    Pursiainen, S.; Kaasalainen, M.

    2014-09-01

    The purpose of this study was to advance numerical methods for radio tomography, in which an asteroid's internal electric permittivity distribution is to be recovered from radio frequency data gathered by an orbiter. The focus was on signal generation via multiple sources (transponders), which provide one potential, or even essential, scenario to be implemented in a challenging in situ measurement environment and within tight payload limits. As a novel feature, the effects of forward errors, including noise and a priori uncertainty of the forward (data) simulation, were examined through a combination of the iterative alternating sequential (IAS) inverse algorithm and finite-difference time-domain (FDTD) simulation of time evolution data. Single and multiple source scenarios were compared in two-dimensional localization of permittivity anomalies. Three different anomaly strengths and four levels of total noise were tested. Results suggest, among other things, that multiple sources can be necessary to obtain appropriate results, for example, to distinguish three separate anomalies with permittivity less than or equal to half of the background value, which is relevant to the recovery of internal cavities.

  12. The use of source memory to identify one's own episodic confusion errors.

    Science.gov (United States)

    Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R

    2001-03-01

    In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.

  13. Effects of errors on the dynamic aperture of the Advanced Photon Source storage ring

    International Nuclear Information System (INIS)

    Bizek, H.; Crosbie, E.; Lessner, E.; Teng, L.; Wirsbinski, J.

    1991-01-01

    The individual tolerance limits for alignment errors and magnet fabrication errors in the 7-GeV Advanced Photon Source storage ring are determined by computer-simulated tracking. Limits are established for dipole strength and roll errors, quadrupole strength and alignment errors, sextupole strength and alignment errors, as well as higher order multipole strengths in dipole and quadrupole magnets. The effects of girder misalignments on the dynamic aperture are also studied. Computer simulations are obtained with the tracking program RACETRACK, with errors introduced from a user-defined Gaussian distribution, truncated at ±5 standard deviation units. For each error, the average and rms spread of the stable amplitudes are determined for ten distinct machines, defined as ten different seeds to the random distribution, and for five distinct initial directions of the tracking particle. 4 refs., 4 figs., 1 tab
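
    The error-seeding procedure described above (Gaussian errors truncated at ±5 standard deviations, ten random seeds per error type) is easy to mimic; the sketch below is illustrative only and is unrelated to the RACETRACK program itself:

      # Illustrative only: draw magnet alignment errors from a Gaussian
      # distribution truncated at +/-5 sigma for ten random "machines" (seeds),
      # as in the tolerance study described above.
      import numpy as np

      def truncated_gaussian(rng, sigma, size, cut=5.0):
          """Gaussian samples with any value beyond cut*sigma redrawn."""
          samples = rng.normal(0.0, sigma, size)
          while True:
              bad = np.abs(samples) > cut * sigma
              if not bad.any():
                  return samples
              samples[bad] = rng.normal(0.0, sigma, bad.sum())

      n_quadrupoles = 400        # hypothetical number of quadrupoles in the ring
      align_sigma_mm = 0.15      # hypothetical rms alignment error, mm

      for seed in range(10):     # ten distinct machines, as in the abstract
          rng = np.random.default_rng(seed)
          errors_mm = truncated_gaussian(rng, align_sigma_mm, n_quadrupoles)
          # ...these errors would then be fed into the tracking model...
          print(f"seed {seed}: max |error| = {np.abs(errors_mm).max():.3f} mm")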

  14. International Test Comparisons: Reviewing Translation Error in Different Source Language-Target Language Combinations

    Science.gov (United States)

    Zhao, Xueyu; Solano-Flores, Guillermo; Qian, Ming

    2018-01-01

    This article addresses test translation review in international test comparisons. We investigated the applicability of the theory of test translation error--a theory of the multidimensionality and inevitability of test translation error--across source language-target language combinations in the translation of PISA (Programme of International…

  15. Identification of error sources in fatigue analyses for thermal loadings

    International Nuclear Information System (INIS)

    Binder, Franz; Gantz, Dieter

    2006-09-01

    To identify thermal loadings (thermal shocks and thermal stratification) in German NPPs, special fatigue monitoring systems have been installed. The detailed temperature measurement uses sheathed thermocouples, which are located on the external component surface. Tightening straps are used in the widespread method of locking the thermocouples into position. The calculation of material fatigue for a loading sequence has to be carried out based on the measured temperature profile of the outer component surface. If the analysis is to comply with the ASME III code, Section NB, either Article NB-3200 or NB-3600 can be applied. In fatigue analyses based on the outer-surface temperature, the thermal situation at the inner surface has to be determined (inverse temperature-field calculation). This leading analysis step is not regulated in the ASME III code. Using general-purpose finite element programs, this problem cannot be explicitly solved, because it requires knowledge of the thermal situation at all boundaries (temperature or heat transfer). In the frequently practiced method, the inner surface temperature profile is varied in a finite element calculation until satisfactory compliance of the calculated outer surface temperature with the measured profile is obtained. Since the input parameters are derived from a variable field, the variation process is large-scale and non-explicit (another input configuration may cause a similar outer surface temperature). Furthermore, the remaining deviation cannot be quantified with regard to the resulting error in the calculated material fatigue. Five typical thermocouple installation methods existing in German LWRs were compared and evaluated regarding the quality of outer surface temperature acquisition. From the evaluation of the experimental data, the essential finding is that for the test transients the maximum of the true outer surface temperature change rate is registered incorrectly with all thermocouple

  16. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
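
    The idea summarised above can be made concrete with a toy example (not from the paper): using the (7,4) Hamming code, a 7-bit source block is replaced by its 3-bit syndrome, and the decompressor returns the minimum-weight pattern (coset leader) with that syndrome, so recovery is exact only for sufficiently sparse blocks:

      # Toy illustration of syndrome-source-coding with the (7,4) Hamming code:
      # a 7-bit source block x is "compressed" to its 3-bit syndrome s = H x (mod 2);
      # the decoder outputs the minimum-weight pattern (coset leader) with that
      # syndrome, so recovery is exact only for blocks of weight <= 1.
      import itertools
      import numpy as np

      H = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])   # Hamming (7,4) parity-check matrix

      def syndrome(x):
          return tuple(H.dot(x) % 2)

      # build the coset-leader table: syndrome -> minimum-weight 7-bit pattern
      leaders = {}
      for weight in range(8):
          for ones in itertools.combinations(range(7), weight):
              x = np.zeros(7, dtype=int)
              x[list(ones)] = 1
              leaders.setdefault(syndrome(x), x)

      source_block = np.array([0, 0, 0, 0, 1, 0, 0])   # sparse source block
      compressed = syndrome(source_block)              # 3 bits instead of 7
      recovered = leaders[compressed]
      print(compressed, recovered, np.array_equal(recovered, source_block))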

  17. Cascade of neural events leading from error commission to subsequent awareness revealed using EEG source imaging.

    Directory of Open Access Journals (Sweden)

    Monica Dhar

    The goal of the present study was to shed light on the respective contributions of three important action monitoring brain regions (i.e., cingulate cortex, insula, and orbitofrontal cortex) during the conscious detection of response errors. To this end, fourteen healthy adults performed a speeded Go/Nogo task comprising Nogo trials of varying levels of difficulty, designed to elicit aware and unaware errors. Error awareness was indicated by participants with a second key press after the target key press. Meanwhile, electromyogram (EMG) from the response hand was recorded in addition to high-density scalp electroencephalogram (EEG). In the EMG-locked grand averages, aware errors clearly elicited an error-related negativity (ERN) reflecting error detection, and a later error positivity (Pe) reflecting conscious error awareness. However, no Pe was recorded after unaware errors or hits. These results are in line with previous studies suggesting that error awareness is associated with generation of the Pe. Source localisation results confirmed that the posterior cingulate motor area was the main generator of the ERN. However, inverse solution results also point to the involvement of the left posterior insula during the time interval of the Pe, and hence error awareness. Moreover, consecutive to this insular activity, the right orbitofrontal cortex (OFC) was activated in response to aware and unaware errors but not in response to hits, consistent with the implication of this area in the evaluation of the value of an error. These results reveal a precise sequence of activations in these three non-overlapping brain regions following error commission, enabling a progressive differentiation between aware and unaware errors as a function of time elapsed, thanks to the involvement first of interoceptive or proprioceptive processes (left insula), later leading to the detection of a breach in the prepotent response mode (right OFC).

  18. Error-free versus mutagenic processing of genomic uracil--relevance to cancer.

    Science.gov (United States)

    Krokan, Hans E; Sætrom, Pål; Aas, Per Arne; Pettersen, Henrik Sahlin; Kavli, Bodil; Slupphaug, Geir

    2014-07-01

    Genomic uracil is normally processed essentially error-free by base excision repair (BER), with mismatch repair (MMR) as an apparent backup for U:G mismatches. Nuclear uracil-DNA glycosylase UNG2 is the major enzyme initiating BER of uracil of U:A pairs as well as U:G mismatches. Deficiency in UNG2 results in several-fold increases in genomic uracil in mammalian cells. Thus, the alternative uracil-removing glycosylases, SMUG1, TDG and MBD4 cannot efficiently complement UNG2-deficiency. A major function of SMUG1 is probably to remove 5-hydroxymethyluracil from DNA with general back-up for UNG2 as a minor function. TDG and MBD4 remove deamination products U or T mismatched to G in CpG/mCpG contexts, but may have equally or more important functions in development, epigenetics and gene regulation. Genomic uracil was previously thought to arise only from spontaneous cytosine deamination and incorporation of dUMP, generating U:G mismatches and U:A pairs, respectively. However, the identification of activation-induced cytidine deaminase (AID) and other APOBEC family members as DNA-cytosine deaminases has spurred renewed interest in the processing of genomic uracil. Importantly, AID triggers the adaptive immune response involving error-prone processing of U:G mismatches, but also contributes to B-cell lymphomagenesis. Furthermore, mutational signatures in a substantial fraction of other human cancers are consistent with APOBEC-induced mutagenesis, with U:G mismatches as prime suspects. Mutations can be caused by replicative polymerases copying uracil in U:G mismatches, or by translesion polymerases that insert incorrect bases opposite abasic sites after uracil-removal. In addition, kataegis, localized hypermutations in one strand in the vicinity of genomic rearrangements, requires APOBEC protein, UNG2 and translesion polymerase REV1. What mechanisms govern error-free versus error prone processing of uracil in DNA remains unclear. In conclusion, genomic uracil is an

  19. Isotopic abundances relevant to the identification of magma sources

    International Nuclear Information System (INIS)

    O'Nions, R.K.

    1984-01-01

    The behaviour of natural radiogenic isotope tracers in the Earth that have lithophile and atmophile geochemical affinity is reviewed. The isotope tracer signature of oceanic and continental crust may, in favourable circumstances, be sufficiently distinct from that of the mantle to render a contribution from these sources resolvable within the isotopic composition of the magma. Components derived from the sedimentary and altered basaltic portions of oceanic crust are recognized in some island arc magmas from their Sr, Nd and Pb isotopic signatures. The rare-gas isotope tracers (He, Ar, Xe in particular) are not readily recycled into the mantle and thus provide the basis of an approach that is complementary to that based on the lithophile tracers. In particular, a small mantle-derived helium component may be readily recognized in the presence of a predominant radiogenic component generated in the continents. The importance of assessing the mass balance of these interactions rather than merely a qualitative recognition is emphasized. The question of the relative contribution of continental-oceanic crust and mantle to magma sources is an essential part of the problem of the generation and evolution of continental crust. An approach to this problem through consideration of the isotopic composition of sediments is briefly discussed. (author)

  20. 50 CFR 424.13 - Sources of information and relevant data.

    Science.gov (United States)

    2010-10-01

    ... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Sources of information and relevant data... Sources of information and relevant data. When considering any revision of the lists, the Secretary shall..., administrative reports, maps or other graphic materials, information received from experts on the subject, and...

  1. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    Science.gov (United States)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.

  2. Impact and quantification of the sources of error in DNA pooling designs.

    Science.gov (United States)

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
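
    The adjustment for differential allelic amplification mentioned above is often implemented as a simple k-correction, with k estimated from the signal ratio in known heterozygotes; the sketch below uses invented numbers and is a generic illustration rather than the authors' method:

      # Generic k-correction sketch (invented numbers) for pooled allele
      # frequency estimation: the ratio k of the two allele signals in known
      # heterozygotes (true ratio 1:1) is used to rescale the pooled signals.
      a_het = 1.30   # hypothetical mean signal of allele A in heterozygous individuals
      b_het = 1.00   # hypothetical mean signal of allele B in heterozygous individuals
      k = a_het / b_het          # differential amplification factor

      a_pool = 2.45  # hypothetical allele A signal measured in the DNA pool
      b_pool = 1.05  # hypothetical allele B signal measured in the DNA pool

      raw_freq = a_pool / (a_pool + b_pool)            # uncorrected estimate
      corrected_freq = a_pool / (a_pool + k * b_pool)  # k-corrected estimate
      print(f"raw = {raw_freq:.3f}, corrected = {corrected_freq:.3f}")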

  3. Is the market size hypothesis relevant for Botswana? Vector error correction framework

    Directory of Open Access Journals (Sweden)

    Kunofiwa Tsaurai

    2015-10-01

    The current study investigated the relevance of the market size hypothesis of FDI in Botswana using the VECM approach, with data ranging from 1975 to 2013. The study used FDI net inflows (% of GDP) as a measure of FDI and GDP per capita as a proxy of market size. The findings of the study are threefold: (1) there exists a long-run uni-directional causality relationship running from GDP per capita to FDI in Botswana; (2) there is no long-run causality running from FDI to GDP per capita in Botswana between 1975 and 2013; and (3) no short-run causality could be established either from GDP per capita to FDI or from FDI to GDP per capita in Botswana. Although GDP per capita of Botswana was a conditional characteristic that attracted FDI, Botswana did not economically benefit from FDI net inflows during the period from 1975 to 2013. The findings defy the theory that FDI brings into the host country improvements in human capital development and technology, among other advantages, which boost economic growth. Possibly, there are other host country characteristics that Botswana needs to address if it hopes to benefit from FDI. The current study recommends further research to find out which other conditional characteristics the Botswana authorities need to put in place to ensure that FDI inflows are translated into economic benefits for the country.
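
    A VECM causality analysis of the kind this record describes could be set up roughly as follows; the file name, column names and lag choices are hypothetical, and the snippet assumes the statsmodels library rather than reproducing the author's actual procedure:

      # Generic sketch of a two-variable VECM causality set-up
      # (hypothetical data file and column names).
      import pandas as pd
      from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

      # annual series: "fdi_gdp" (FDI net inflows, % of GDP) and "gdp_pc" (GDP per capita)
      data = pd.read_csv("botswana_1975_2013.csv", index_col="year")[["fdi_gdp", "gdp_pc"]]

      # choose the cointegration rank with the Johansen trace test
      rank = select_coint_rank(data, det_order=0, k_ar_diff=1, method="trace", signif=0.05)

      # fit the VECM; the significance of the adjustment coefficients (alpha)
      # indicates which variable responds to the long-run equilibrium error,
      # i.e. the direction of long-run causality
      model = VECM(data, k_ar_diff=1, coint_rank=rank.rank, deterministic="co")
      res = model.fit()
      print(res.summary())
      print("adjustment coefficients (alpha):\n", res.alpha)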

  4. The accuracy of webcams in 2D motion analysis: sources of error and their control

    International Nuclear Information System (INIS)

    Page, A; Candelas, P; Belmar, F; Moreno, R

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost

  5. The accuracy of webcams in 2D motion analysis: sources of error and their control

    Energy Technology Data Exchange (ETDEWEB)

    Page, A; Candelas, P; Belmar, F [Departamento de Fisica Aplicada, Universidad Politecnica de Valencia, Valencia (Spain); Moreno, R [Instituto de Biomecanica de Valencia, Valencia (Spain)], E-mail: alvaro.page@ibv.upv.es

    2008-07-15

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented. Finally, an experiment with controlled movement is performed to experimentally measure the errors described above and to assess the effectiveness of the proposed corrective measures. It will be shown that when these aspects are considered, it is possible to obtain errors lower than 0.1%. This level of accuracy demonstrates that webcams should be considered as very precise and accurate measuring instruments at a remarkably low cost.

  6. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    Science.gov (United States)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  7. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    Science.gov (United States)

    Olson, Corwin; Long, Anne; Carpenter, J. Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  8. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    Science.gov (United States)

    Olson, Corwin; Long, Anne; Carpenter, J. Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  9. Partial and specific source memory for faces associated to other- and self-relevant negative contexts.

    Science.gov (United States)

    Bell, Raoul; Giang, Trang; Buchner, Axel

    2012-01-01

    Previous research has shown a source memory advantage for faces presented in negative contexts. As yet it remains unclear whether participants remember the specific type of context in which the faces were presented or whether they can only remember that the face was associated with negative valence. In the present study, participants saw faces together with descriptions of two different types of negative behaviour and neutral behaviour. In Experiment 1, we examined whether the participants were able to discriminate between two types of other-relevant negative context information (cheating and disgusting behaviour) in a source memory test. In Experiment 2, we assessed source memory for other-relevant negative (threatening) context information (other-aggressive behaviour) and self-relevant negative context information (self-aggressive behaviour). A multinomial source memory model was used to separately assess partial source memory for the negative valence of the behaviour and specific source memory for the particular type of negative context the face was associated with. In Experiment 1, source memory was specific for the particular type of negative context presented (i.e., cheating or disgusting behaviour). Experiment 2 showed that source memory for other-relevant negative information was more specific than source memory for self-relevant information. Thus, emotional source memory may vary in specificity depending on the degree to which the negative emotional context is perceived as threatening.

  10. Adaptation to sensory-motor reflex perturbations is blind to the source of errors.

    Science.gov (United States)

    Hudson, Todd E; Landy, Michael S

    2012-01-06

    In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.

  11. Using Generalizability Theory to Disattenuate Correlation Coefficients for Multiple Sources of Measurement Error.

    Science.gov (United States)

    Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat

    2018-05-02

    Over the years, research in the social sciences has been dominated by reporting of reliability coefficients that fail to account for key sources of measurement error. Use of these coefficients, in turn, to correct for measurement error can hinder scientific progress by misrepresenting true relationships among the underlying constructs being investigated. In the research reported here, we addressed these issues using generalizability theory (G-theory) in both traditional and new ways to account for the three key sources of measurement error (random-response, specific-factor, and transient) that affect scores from objectively scored measures. Results from 20 widely used measures of personality, self-concept, and socially desirable responding showed that conventional indices consistently misrepresented reliability and relationships among psychological constructs by failing to account for key sources of measurement error and correlated transient errors within occasions. The results further revealed that G-theory served as an effective framework for remedying these problems. We discuss possible extensions in future research and provide code from the computer package R in an online supplement to enable readers to apply the procedures we demonstrate to their own research.
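    The correction for measurement error that the abstract refers to is, at its core, the classical disattenuation formula; the G-theory coefficients simply supply better reliability estimates to plug into it. The sketch below is an illustrative Python rendering of that formula with made-up numbers, not the authors' R code.

        # Illustrative sketch: Spearman's correction for attenuation. If the reliability
        # coefficients omit key error sources they come out too high, and the "corrected"
        # correlation is then understated. All numbers are made up.
        import math

        def disattenuate(r_xy, rel_x, rel_y):
            """Estimate the correlation between true scores from an observed correlation."""
            return r_xy / math.sqrt(rel_x * rel_y)

        r_obs = 0.42      # observed correlation between two scale scores (assumed)
        rel_x = 0.78      # generalizability (reliability) coefficient for scale X (assumed)
        rel_y = 0.81      # generalizability coefficient for scale Y (assumed)
        print(f"disattenuated r = {disattenuate(r_obs, rel_x, rel_y):.3f}")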

  12. Imprecision in waggle dances of the honeybee (Apis mellifera) for nearby food sources : error or adaptation?

    OpenAIRE

    Weidenmüller, Anja; Seeley, Thomas

    1999-01-01

    A curious feature of the honeybee's waggle dance is the imprecision in the direction indication for nearby food sources. One hypothesis for the function of this imprecision is that it serves to spread recruits over a certain area and thus is an adaptation to the typical spatial configuration of the bees' food sources, i.e., flowers in sizable patches. We report an experiment that tests this tuned-error hypothesis. We measured the precision of direction indication in waggle dances advertising ...

  13. Source Memory Errors Associated with Reports of Posttraumatic Flashbacks: A Proof of Concept Study

    Science.gov (United States)

    Brewin, Chris R.; Huntley, Zoe; Whalley, Matthew G.

    2012-01-01

    Flashbacks are involuntary, emotion-laden images experienced by individuals with posttraumatic stress disorder (PTSD). The qualities of flashbacks could under certain circumstances lead to source memory errors. Participants with PTSD wrote a trauma narrative and reported the experience of flashbacks. They were later presented with stimuli from…

  14. Decoy-state quantum key distribution with both source errors and statistical fluctuations

    International Nuclear Information System (INIS)

    Wang Xiangbin; Yang Lin; Peng Chengzhi; Pan Jianwei

    2009-01-01

    We show how to calculate the fraction of single-photon counts of the 3-intensity decoy-state quantum cryptography faithfully with both statistical fluctuations and source errors. Our results rely only on the bound values of a few parameters of the states of pulses.
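    For context, the baseline three-intensity decoy-state estimate (without source errors or fluctuations) is a closed-form bound in the style of Ma et al.; the paper above extends this to imperfect sources. The sketch below shows only that simplified baseline with assumed gains and intensities, not the bound derived in the paper.

        # Illustrative sketch: standard three-intensity decoy-state lower bound on the
        # single-photon yield Y1 and the single-photon fraction of signal counts.
        # Source errors and statistical fluctuations (the paper's subject) are omitted.
        # All numbers are assumed.
        import math

        mu, nu = 0.5, 0.1             # signal and decoy mean photon numbers
        Q_mu, Q_nu = 5.0e-3, 1.01e-3  # measured gains of signal and decoy pulses
        Y0 = 1.0e-5                   # vacuum (dark-count) yield

        Y1 = (mu / (mu * nu - nu ** 2)) * (
            Q_nu * math.exp(nu)
            - Q_mu * math.exp(mu) * nu ** 2 / mu ** 2
            - (mu ** 2 - nu ** 2) / mu ** 2 * Y0
        )
        frac_single = Y1 * mu * math.exp(-mu) / Q_mu  # fraction of signal counts from single photons
        print(f"Y1 >= {Y1:.2e}, single-photon fraction >= {frac_single:.3f}")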

  15. The effect of current and relevant information sources on the use

    African Journals Online (AJOL)

    Admin

    reported similar findings at Yaba College of Technology, Lagos. However, in a ... values. In other words, current information sources resulted in the use of the library. Jam (1992) identified lack of relevant information sources to be one of the problems facing library users and has ... Bachelor's degree holders. That those with.

  16. Utilising identifier error variation in linkage of large administrative data sources

    Directory of Open Access Journals (Sweden)

    Katie Harron

    2017-02-01

    Full Text Available Abstract Background: Linkage of administrative data sources often relies on probabilistic methods using a set of common identifiers (e.g. sex, date of birth, postcode). Variation in data quality on an individual or organisational level (e.g. by hospital) can result in clustering of identifier errors, violating the assumption of independence between identifiers required for traditional probabilistic match weight estimation. This potentially introduces selection bias to the resulting linked dataset. We aimed to measure variation in identifier error rates in a large English administrative data source (Hospital Episode Statistics; HES) and to incorporate this information into match weight calculation. Methods: We used 30,000 randomly selected HES hospital admissions records of patients aged 0–1, 5–6 and 18–19 years, for 2011/2012, linked via NHS number with data from the Personal Demographic Service (PDS; our gold standard). We calculated identifier error rates for sex, date of birth and postcode and used multi-level logistic regression to investigate associations with individual-level attributes (age, ethnicity, and gender) and organisational variation. We then derived: (i) weights incorporating dependence between identifiers; (ii) attribute-specific weights (varying by age, ethnicity and gender); and (iii) organisation-specific weights (by hospital). Results were compared with traditional match weights using a simulation study. Results: Identifier errors (where values disagreed in linked HES-PDS records) or missing values were found in 0.11% of records for sex and date of birth and in 53% of records for postcode. Identifier error rates differed significantly by age, ethnicity and sex (p < 0.0005). Errors were less frequent in males, in 5–6 year olds and 18–19 year olds compared with infants, and were lowest for the Asian ethnic group. A simulation study demonstrated that substantial bias was introduced into estimated readmission rates in the presence
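    The match weights mentioned above come from the Fellegi-Sunter framework, where each identifier's error rate sets its m-probability. The sketch below shows the basic weight calculation with assumed m- and u-probabilities; it does not reproduce the study's dependence-adjusted or attribute-specific weights.

        # Illustrative sketch: Fellegi-Sunter agreement/disagreement weights per identifier.
        # m = P(identifier agrees | true match) = 1 - identifier error rate
        # u = P(identifier agrees | non-match), e.g. chance agreement. Numbers are assumed.
        import math

        def match_weights(m, u):
            """Return (agreement weight, disagreement weight) in log2 units."""
            return math.log2(m / u), math.log2((1 - m) / (1 - u))

        identifiers = {
            "sex":           (1 - 0.0011, 0.5),        # ~0.11% error, ~50% chance agreement
            "date_of_birth": (1 - 0.0011, 1 / 365),    # crude chance-agreement assumption
            "postcode":      (1 - 0.53,   1 / 10000),  # ~53% error/missing, assumed u
        }
        for name, (m, u) in identifiers.items():
            agree_w, disagree_w = match_weights(m, u)
            print(f"{name:14s} agreement {agree_w:+6.2f}   disagreement {disagree_w:+6.2f}")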

  17. Accuracy and Sources of Error for an Angle Independent Volume Flow Estimator

    DEFF Research Database (Denmark)

    Jensen, Jonas; Olesen, Jacob Bjerring; Hansen, Peter Møller

    2014-01-01

    This paper investigates sources of error for a vector velocity volume flow estimator. Quantification of the estimator's accuracy is performed theoretically and investigated in vivo. Womersley's model for pulsatile flow is used to simulate velocity profiles and calculate volume flow errors. A BK Medical UltraView 800 ultrasound scanner with a 9 MHz linear array transducer is used to obtain Vector Flow Imaging sequences of a superficial part of the fistulas. Cross-sectional diameters of each fistula are measured on B-mode images by rotating the scan plane 90 degrees. The major axis

  18. Source memory errors in schizophrenia, hallucinations and negative symptoms: a synthesis of research findings.

    Science.gov (United States)

    Brébion, G; Ohlsen, R I; Bressan, R A; David, A S

    2012-12-01

    Previous research has shown associations between source memory errors and hallucinations in patients with schizophrenia. We bring together here findings from a broad memory investigation to specify better the type of source memory failure that is associated with auditory and visual hallucinations. Forty-one patients with schizophrenia and 43 healthy participants underwent a memory task involving recall and recognition of lists of words, recognition of pictures, memory for temporal and spatial context of presentation of the stimuli, and remembering whether target items were presented as words or pictures. False recognition of words and pictures was associated with hallucination scores. The extra-list intrusions in free recall were associated with verbal hallucinations whereas the intra-list intrusions were associated with a global hallucination score. Errors in discriminating the temporal context of word presentation and the spatial context of picture presentation were associated with auditory hallucinations. The tendency to remember verbal labels of items as pictures of these items was associated with visual hallucinations. Several memory errors were also inversely associated with affective flattening and anhedonia. Verbal and visual hallucinations are associated with confusion between internal verbal thoughts or internal visual images and perception. In addition, auditory hallucinations are associated with failure to process or remember the context of presentation of the events. Certain negative symptoms have an opposite effect on memory errors.

  19. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    International Nuclear Information System (INIS)

    DeSalvo, Riccardo

    2015-01-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested. - Highlights: • Source of discrepancies in universal gravitational constant G measurements. • Collective motion of dislocations results in breakdown of Hooke's law. • Self-organized criticality produces non-predictive shifts of the equilibrium point. • A new dissipation mechanism, different from loss-angle and viscous models, is necessary. • The mitigation measures proposed may bring coherence to the measurements of G

  20. Longitudinal Cut Method Revisited: A Survey on the Main Error Sources

    OpenAIRE

    Moriconi, Alessandro; Lalli, Francesco; Di Felice, Fabio; Esposito, Pier Giorgio; Piscopia, Rodolfo

    2000-01-01

    Some of the main error sources in wave pattern resistance determination were investigated. The experimental data obtained at the Italian Ship Model Basin (longitudinal wave cuts concerned with the steady motion of the Series 60 model and a hard-chine catamaran) were analyzed. It was found that, within the range of Froude numbers tested (0.225 ≤ Fr ≤ 0.345 for the Series 60 and 0.5 ≤ Fr ≤ 1 for the catamaran) two sources of uncertainty play a significant role: (i) the p...

  1. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    Energy Technology Data Exchange (ETDEWEB)

    Joint Graduate Group in Bioengineering, University of California, San Francisco and University of California, Berkeley; Department of Radiology, University of California; Gullberg, Grant T; Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the
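    The order of magnitude of the attenuation losses quoted above follows from simple exponential attenuation. The sketch below uses rough, assumed linear attenuation coefficients for water (not values from the paper) and an assumed rat-sized phantom radius to illustrate the calculation.

        # Illustrative sketch: fraction of photons attenuated between the centre and the
        # surface of a water cylinder, for a rat-sized phantom. The attenuation
        # coefficients are rough assumed values, not taken from the paper.
        import math

        radius_cm = 2.5                          # rat-sized cylinder radius (assumed)
        mu_water = {
            "Tc-99m (140 keV)": 0.15,            # 1/cm, assumed
            "I-125 (~30 keV)":  0.38,            # 1/cm, assumed
        }
        for isotope, mu in mu_water.items():
            transmission = math.exp(-mu * radius_cm)
            print(f"{isotope}: {100 * (1 - transmission):.0f}% of photons attenuated")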

  2. Correlation of errors in the Monte Carlo fission source and the fission matrix fundamental-mode eigenvector

    International Nuclear Information System (INIS)

    Dufek, Jan; Holst, Gustaf

    2016-01-01

    Highlights: • Errors in the fission matrix eigenvector and fission source are correlated. • The error correlations depend on coarseness of the spatial mesh. • The error correlations are negligible when the mesh is very fine. - Abstract: Previous studies raised a question about the level of a possible correlation of errors in the cumulative Monte Carlo fission source and the fundamental-mode eigenvector of the fission matrix. A number of new methods tally the fission matrix during the actual Monte Carlo criticality calculation, and use its fundamental-mode eigenvector for various tasks. The methods assume the fission matrix eigenvector is a better representation of the fission source distribution than the actual Monte Carlo fission source, although the fission matrix and its eigenvectors do contain statistical and other errors. A recent study showed that the eigenvector could be used for an unbiased estimation of errors in the cumulative fission source if the errors in the eigenvector and the cumulative fission source were not correlated. Here we present new numerical study results that answer the question about the level of the possible error correlation. The results may be of importance to all methods that use the fission matrix. New numerical tests show that the error correlation is present at a level which strongly depends on properties of the spatial mesh used for tallying the fission matrix. The error correlation is relatively strong when the mesh is coarse, while the correlation weakens as the mesh gets finer. We suggest that the coarseness of the mesh is measured in terms of the value of the largest element in the tallied fission matrix as that way accounts for the mesh as well as system properties. In our test simulations, we observe only negligible error correlations when the value of the largest element in the fission matrix is about 0.1. Relatively strong error correlations appear when the value of the largest element in the fission matrix raises
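    The fundamental-mode eigenvector discussed above is typically obtained from the tallied fission matrix by power iteration. The sketch below runs that iteration on a small made-up matrix; in practice the matrix would be tallied over a spatial mesh during the Monte Carlo criticality run.

        # Illustrative sketch: fundamental-mode eigenvector of a (made-up) fission matrix
        # via power iteration, with a simple eigenvalue estimate.
        import numpy as np

        rng = np.random.default_rng(1)
        n_mesh = 20                               # number of spatial mesh cells (assumed)
        F = rng.random((n_mesh, n_mesh)) * 0.05   # stand-in for the tallied fission matrix
        np.fill_diagonal(F, 1.0)                  # make the fundamental mode dominant

        psi = np.ones(n_mesh) / n_mesh            # initial guess for the fission source shape
        for _ in range(200):
            psi_new = F @ psi
            k_est = psi_new.sum() / psi.sum()     # eigenvalue (k) estimate
            psi = psi_new / psi_new.sum()         # keep the source distribution normalised

        print(f"k estimate: {k_est:.4f}")
        print("fundamental-mode source shape:", np.round(psi, 3))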

  3. Study of principle error sources in gamma spectrometry. Application to cross sections measurement

    International Nuclear Information System (INIS)

    Majah, M. Ibn.

    1985-01-01

    The principal error sources in gamma spectrometry have been studied with the aim of measuring cross sections with high precision. Three error sources were studied: dead time and pile-up, which depend on the counting rate, and the coincidence effect, which depends on the disintegration scheme of the radionuclide in question. A constant-frequency pulse generator was used to correct the counting loss due to dead time and pile-up for both long and short disintegration periods. The loss due to the coincidence effect can reach 25% and more, depending on the disintegration scheme and on the source-detector distance. After establishing the correction formula and verifying its validity for four examples (iron-56, scandium-48, antimony-120 and gold-196m), the method was applied to measure cross sections of nuclear reactions leading to long disintegration periods, which require counting at short source-detector distance and thus correction for the losses due to dead time, pile-up and the coincidence effect. 16 refs., 45 figs., 25 tabs. (author)
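    The dead-time and pile-up corrections described above can be illustrated with the standard non-paralyzable model and the pulser method mentioned in the abstract. The numbers below are assumed, and the formula is the textbook one, not the correction formula established in the thesis.

        # Illustrative sketch: dead-time correction of a measured counting rate using the
        # non-paralyzable model, and the equivalent pulser-based loss correction in which
        # a constant-frequency pulser shares the losses of the whole chain. Numbers assumed.
        measured_rate = 9.0e3     # counts per second recorded by the analyser
        dead_time = 5.0e-6        # seconds of dead time per processed pulse

        # Non-paralyzable model: n_true = n_measured / (1 - n_measured * tau)
        true_rate = measured_rate / (1.0 - measured_rate * dead_time)
        print(f"non-paralyzable correction: {true_rate:.1f} cps")

        # Pulser method: the surviving fraction of injected pulser counts estimates the
        # combined dead-time and pile-up loss for the whole spectrum.
        pulser_injected = 50.0    # pulser frequency, Hz
        pulser_recorded = 47.6    # pulser peak rate recovered from the spectrum
        loss_factor = pulser_recorded / pulser_injected
        print(f"pulser-corrected rate: {measured_rate / loss_factor:.1f} cps")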

  4. Characterization of the main error sources of chromatic confocal probes for dimensional measurement

    International Nuclear Information System (INIS)

    Nouira, H; El-Hayek, N; Yuan, X; Anwer, N

    2014-01-01

    Chromatic confocal probes are increasingly used in high-precision dimensional metrology applications such as roughness, form, thickness and surface profile measurements; however, their measurement behaviour is not well understood and must be characterized at a nanometre level. This paper provides a calibration bench for the characterization of two chromatic confocal probes of 20 and 350 µm travel ranges. The metrology loop that includes the chromatic confocal probe is stable and enables measurement repeatability at the nanometer level. With the proposed system, the major error sources, such as the relative axial and radial motions of the probe with respect to the sample, the material, colour and roughness of the measured sample, the relative deviation/tilt of the probe and the scanning speed are identified. Experimental test results show that the chromatic confocal probes are sensitive to these errors and that their measurement behaviour is highly dependent on them. (paper)

  5. Measurement error in mobile source air pollution exposure estimates due to residential mobility during pregnancy.

    Science.gov (United States)

    Pennington, Audrey Flak; Strickland, Matthew J; Klein, Mitchel; Zhai, Xinxin; Russell, Armistead G; Hansen, Craig; Darrow, Lyndsey A

    2017-09-01

    Prenatal air pollution exposure is frequently estimated using maternal residential location at the time of delivery as a proxy for residence during pregnancy. We describe residential mobility during pregnancy among 19,951 children from the Kaiser Air Pollution and Pediatric Asthma Study, quantify measurement error in spatially resolved estimates of prenatal exposure to mobile source fine particulate matter (PM2.5) due to ignoring this mobility, and simulate the impact of this error on estimates of epidemiologic associations. Two exposure estimates were compared, one calculated using complete residential histories during pregnancy (weighted average based on time spent at each address) and the second calculated using only residence at birth. Estimates were computed using annual averages of primary PM2.5 from traffic emissions modeled using a Research LINE-source dispersion model for near-surface releases (RLINE) at 250 m resolution. In this cohort, 18.6% of children were born to mothers who moved at least once during pregnancy. Mobile source PM2.5 exposure estimates calculated using complete residential histories during pregnancy and only residence at birth were highly correlated (r_S > 0.9). Simulations indicated that ignoring residential mobility resulted in modest bias of epidemiologic associations toward the null, but varied by maternal characteristics and prenatal exposure windows of interest (ranging from -2% to -10% bias).
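    The "complete residential history" exposure referred to above is a simple time-weighted average over addresses. The sketch below illustrates it with made-up durations and concentrations; the RLINE-modelled values themselves are not reproduced.

        # Illustrative sketch: pregnancy-average exposure from a complete residential
        # history (time-weighted) versus the birth-address-only estimate. Made-up data.
        history = [
            # (days lived at the address during pregnancy, modelled PM2.5 at the address, ug/m3)
            (120, 1.4),
            (90,  0.6),
            (70,  0.9),   # address at delivery
        ]

        total_days = sum(days for days, _ in history)
        weighted = sum(days * conc for days, conc in history) / total_days
        birth_only = history[-1][1]

        print(f"history-weighted exposure: {weighted:.2f} ug/m3")
        print(f"birth-address exposure:    {birth_only:.2f} ug/m3")
        print(f"relative difference:       {100 * (birth_only - weighted) / weighted:+.1f} %")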

  6. Sources of errors in the determination of fluorine in feeding stuffs

    Energy Technology Data Exchange (ETDEWEB)

    Oelschlaeger, W; Kirchgessner, M

    1960-01-01

    The difference between deficiency and toxicity levels of F in fodder is small; for this reason the many sources of error in the estimation of F contents are discussed. A list of these errors and suggested preventive measures are included. Finally, detailed working instructions are given for accurate F analysis, and representative F contents of certain feeding stuffs are tabulated. A maximum permissible limit for dairy cattle of 2-3 mg F per day per kg body weight is suggested. F contents of plants growing near HF-producing plants, especially downwind, are often dangerously high.

  7. Dye shift: a neglected source of genotyping error in molecular ecology.

    Science.gov (United States)

    Sutton, Jolene T; Robertson, Bruce C; Jamieson, Ian G

    2011-05-01

    Molecular ecologists must be vigilant in detecting and accounting for genotyping error, yet potential errors stemming from dye-induced mobility shift (dye shift) may be frequently neglected and largely unknown to researchers who employ 3-primer systems with automated genotyping. When left uncorrected, dye shift can lead to mis-scoring alleles and even to falsely calling new alleles if different dyes are used to genotype the same locus in subsequent reactions. When we used four different fluorophore labels from a standard dye set to genotype the same set of loci, differences in the resulting size estimates for a single allele ranged from 2.07 bp to 3.68 bp. The strongest effects were associated with the fluorophore PET, and relative degree of dye shift was inversely related to locus size. We found little evidence in the literature that dye shift is regularly accounted for in 3-primer studies, despite knowledge of this phenomenon existing for over a decade. However, we did find some references to erroneous standard correction factors for the same set of dyes that we tested. We thus reiterate the need for strict quality control when attempting to reduce possible sources of genotyping error, and in cases where different dyes are applied to a single locus, perhaps mistakenly, we strongly discourage researchers from assuming generic correction patterns. © 2011 Blackwell Publishing Ltd.

  8. Problems of accuracy and sources of error in trace analysis of elements

    International Nuclear Information System (INIS)

    Porat, Ze'ev.

    1995-07-01

    The technological developments in the field of analytical chemistry in recent years facilitate trace analysis of materials at sub-ppb levels. This provides important information regarding the presence of various trace elements in the human body, in drinking water and in the environment. However, it also exposes the measurements to more severe problems of contamination and inaccuracy due to the high sensitivity of the analytical methods. The sources of error are numerous and can be grouped into three main categories: (a) impurities from various sources; (b) loss of material during sample processing; (c) problems of calibration and interference. These difficulties are discussed here in detail, together with some practical solutions and examples. (authors) 8 figs., 2 tabs., 18 refs.

  9. Problems of accuracy and sources of error in trace analysis of elements

    Energy Technology Data Exchange (ETDEWEB)

    Porat, Ze'ev

    1995-07-01

    The technological developments in the field of analytical chemistry in recent years facilitate trace analysis of materials at sub-ppb levels. This provides important information regarding the presence of various trace elements in the human body, in drinking water and in the environment. However, it also exposes the measurements to more severe problems of contamination and inaccuracy due to the high sensitivity of the analytical methods. The sources of error are numerous and can be grouped into three main categories: (a) impurities from various sources; (b) loss of material during sample processing; (c) problems of calibration and interference. These difficulties are discussed here in detail, together with some practical solutions and examples. (authors) 8 figs., 2 tabs., 18 refs.

  10. Error-source effects on the performance of direct and iterative algorithms on an optical matrix-vector processor

    Science.gov (United States)

    Perlee, Caroline J.; Casasent, David P.

    1990-09-01

    Error sources in an optical matrix-vector processor are analyzed in terms of their effect on the performance of the algorithms used to solve a set of nonlinear and linear algebraic equations. A direct and an iterative algorithm are used to solve a nonlinear time-dependent case study from computational fluid dynamics. A simulator which emulates the data flow and number representation of the OLAP is used to study these error effects. The ability of each algorithm to tolerate or correct the error sources is quantified. These results are extended to the general case of solving nonlinear and linear algebraic equations on the optical system.
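    The comparison of direct and iterative algorithms under processor noise can be mimicked numerically. The sketch below is a generic illustration with a made-up diagonally dominant system and a simple multiplicative noise model; it is unrelated to the OLAP hardware or the paper's CFD case study.

        # Illustrative sketch: a direct solve of a noise-perturbed system versus a Jacobi
        # iteration whose matrix-vector products carry fresh multiplicative noise each
        # step, mimicking an analog processor. Compare the final errors. Data made up.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        A = np.eye(n) * 4 + rng.random((n, n)) * 0.5   # diagonally dominant -> Jacobi converges
        b = rng.random(n)
        x_exact = np.linalg.solve(A, b)

        def noisy_matvec(M, v, rel_noise=1e-2):
            """Matrix-vector product with multiplicative error on every entry of the result."""
            return (M @ v) * (1 + rel_noise * rng.standard_normal(len(v)))

        # "Direct" solve with a one-shot noisy copy of the matrix.
        A_noisy = A * (1 + 1e-2 * rng.standard_normal((n, n)))
        x_direct = np.linalg.solve(A_noisy, b)

        # Jacobi iteration: x <- (b - R x) / D, with R x computed noisily every step.
        D = np.diag(A)
        R = A - np.diag(D)
        x = np.zeros(n)
        for _ in range(200):
            x = (b - noisy_matvec(R, x)) / D

        print("noisy direct solve error:", np.linalg.norm(x_direct - x_exact))
        print("noisy Jacobi error:      ", np.linalg.norm(x - x_exact))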

  11. Methodical assessment of all non-ionizing radiation sources that can provide a relevant contribution to public exposure. Final report

    International Nuclear Information System (INIS)

    Bornkessel, Christian; Schubert, Markus; Wuschek, Matthias; Brueggemeyer, Hauke; Weiskopf, Daniela

    2011-01-01

    The aim of the research project was to systematically identify artificial sources of non-ionizing radiation (electric, magnetic or electromagnetic fields in the frequency range from 0 Hz to 300 GHz, as well as optical radiation in the wavelength range from 100 nm to 1 mm) that make a relevant contribution to public exposure. The report includes the following chapters: (1) Concept for the relevance assessment of non-ionizing radiation sources; (2) concept for the systematic identification of sources from established technologies; (3) concept for the systematic identification of sources from new or foreseeable technologies; (4) overview of relevant radiation sources.

  12. From the Lab to the real world : sources of error in UF6 gas enrichment monitoring

    International Nuclear Information System (INIS)

    Lombardi, Marcie L.

    2012-01-01

    monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months
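    The empty-pipe measurement discussed above rests on the simple transmission relation I = I0 exp(-μ t). The sketch below shows how a wall-attenuation factor would be inferred from such a measurement; the count rates and attenuation coefficient are assumed values, not data from the dissertation.

        # Illustrative sketch: inferring the pipe-wall attenuation factor (and effective
        # wall path) from a transmission measurement, so the much smaller gas attenuation
        # can later be separated out. All numbers are assumed.
        import math

        I0 = 1.00e5          # transmission-source count rate with no pipe in the beam
        I = 6.3e4            # count rate measured through the empty pipe walls
        mu_wall = 1.2        # linear attenuation coefficient of the wall material, 1/cm

        wall_path = -math.log(I / I0) / mu_wall   # effective wall thickness along the beam, cm
        wall_attenuation = I / I0                 # factor to divide out of gas measurements

        print(f"inferred wall path: {wall_path:.2f} cm")
        print(f"wall attenuation factor: {wall_attenuation:.3f}")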

  13. Overview of sources of radioactive particles of Nordic relevance as well as a short description of available particle characterisation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Lind, O.C.; Salbu, B. (Norwegian Univ. of Life Sciences (Norway)); Nygren, U.; Thaning, L.; Ramebaeck, H. (Swedish Defense Research Agency (FOI) (Sweden)); Sidhu, S. (Inst. for Energy Technology (Norway)); Roos, P. (Technical Univ. of Denmark. Risoe DTU, Roskilde (Denmark)); Poellaenen, R. (STUK (Finland)); Ranebo, Y.; Holm, E. (Univ. Lund (Sweden))

    2008-10-15

    The present overview report shows that there are many existing and potential sources of radioactive particle contamination of relevance to the Nordic countries. Following their release, radioactive particles represent point sources of short- and long-term radioecological significance, and the failure to recognise their presence may lead to significant errors in the short- and long-term impact assessments related to radioactive contamination at a particular site. Thus, there is a need for knowledge with respect to the probability, quantity and expected impact of radioactive particle formation and release in case of specified potential nuclear events (e.g. reactor accident or nuclear terrorism). Furthermore, knowledge with respect to the particle characteristics influencing transport, ecosystem transfer and biological effects is important. In this respect, it should be noted that an IAEA coordinated research project was running from 2000-2006 (IAEA CRP, 2001) focussing on characterisation and environmental impact of radioactive particles, while a new IAEA CRP focussing on the biological effects of radioactive particles will be launched in 2008. (author)

  14. Overview of sources of radioactive particles of Nordic relevance as well as a short description of available particle characterisation techniques

    International Nuclear Information System (INIS)

    Lind, O.C.; Salbu, B.; Nygren, U.; Thaning, L.; Ramebaeck, H.; Sidhu, S.; Roos, P.; Poellaenen, R.; Ranebo, Y.; Holm, E.

    2008-10-01

    The present overview report shows that there are many existing and potential sources of radioactive particle contamination of relevance to the Nordic countries. Following their release, radioactive particles represent point sources of short- and long-term radioecological significance, and the failure to recognise their presence may lead to significant errors in the short- and long-term impact assessments related to radioactive contamination at a particular site. Thus, there is a need for knowledge with respect to the probability, quantity and expected impact of radioactive particle formation and release in case of specified potential nuclear events (e.g. reactor accident or nuclear terrorism). Furthermore, knowledge with respect to the particle characteristics influencing transport, ecosystem transfer and biological effects is important. In this respect, it should be noted that an IAEA coordinated research project was running from 2000-2006 (IAEA CRP, 2001) focussing on characterisation and environmental impact of radioactive particles, while a new IAEA CRP focussing on the biological effects of radioactive particles will be launched in 2008. (au)

  15. Campylobacter species in animal, food, and environmental sources, and relevant testing programs in Canada.

    Science.gov (United States)

    Huang, Hongsheng; Brooks, Brian W; Lowman, Ruff; Carrillo, Catherine D

    2015-10-01

    Campylobacter species, particularly thermophilic campylobacters, have emerged as a leading cause of human foodborne gastroenteritis worldwide, with Campylobacter jejuni, Campylobacter coli, and Campylobacter lari responsible for the majority of human infections. Although most cases of campylobacteriosis are self-limiting, campylobacteriosis represents a significant public health burden. Human illness caused by infection with campylobacters has been reported across Canada since the early 1970s. Many studies have shown that dietary sources, including food, particularly raw poultry and other meat products, raw milk, and contaminated water, have contributed to outbreaks of campylobacteriosis in Canada. Campylobacter spp. have also been detected in a wide range of animal and environmental sources, including water, in Canada. The purpose of this article is to review (i) the prevalence of Campylobacter spp. in animals, food, and the environment, and (ii) the relevant testing programs in Canada with a focus on the potential links between campylobacters and human health in Canada.

  16. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    Science.gov (United States)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of the practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. Besides, we adopt this model to carry out simulations of two widely used sources: weak coherent source (WCS) and heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. And when taking source errors and statistical fluctuations into account, the performance of decoy-state QKD using HSPS suffered less than that of decoy-state QKD using WCS.

  17. Summary of mirror experiments relevant to beam-plasma neutron source

    International Nuclear Information System (INIS)

    Molvik, A.W.

    1988-01-01

    A promising design for a deuterium-tritium (DT) neutron source is based on the injection of neutral beams into a dense, warm plasma column. Its purpose is to test materials for possible use in fusion reactors. A series of designs have evolved, from a 4-T version to an 8-T version. Intense fluxes of 5–10 MW/m² are achieved at the plasma surface, sufficient to complete end-of-life tests in one to two years. In this report, we review data from earlier mirror experiments that are relevant to such neutron sources. Most of these data are from 2XIIB, which was the only facility ever to inject 5 MW of neutral beams into a single mirror cell. The major physics issues for a beam-plasma neutron source are magnetohydrodynamic (MHD) equilibrium and stability, microstability, startup, cold-ion fueling of the midplane to allow two-component reactions, and operation in the Spitzer conduction regime, where the power is removed to the ends by an axial gradient in the electron temperature T_e. We show in this report that the conditions required for a neutron source have now been demonstrated in experiments. 20 refs., 15 figs., 3 tabs

  18. Use and perceptions of information among family physicians: sources considered accessible, relevant, and reliable.

    Science.gov (United States)

    Kosteniuk, Julie G; Morgan, Debra G; D'Arcy, Carl K

    2013-01-01

    The research determined (1) the information sources that family physicians (FPs) most commonly use to update their general medical knowledge and to make specific clinical decisions, and (2) the information sources FPs found to be most physically accessible, intellectually accessible (easy to understand), reliable (trustworthy), and relevant to their needs. A cross-sectional postal survey of 792 FPs and locum tenens, in full-time or part-time medical practice, currently practicing or on leave of absence in the Canadian province of Saskatchewan was conducted during the period of January to April 2008. Of 666 eligible physicians, 331 completed and returned surveys, resulting in a response rate of 49.7% (331/666). Medical textbooks and colleagues in the main patient care setting were the top 2 sources for the purpose of making specific clinical decisions. Medical textbooks were most frequently considered by FPs to be reliable (trustworthy), and colleagues in the main patient care setting were most physically accessible (easy to access). When making specific clinical decisions, FPs were most likely to use information from sources that they considered to be reliable and generally physically accessible, suggesting that FPs can best be supported by facilitating easy and convenient access to high-quality information.

  19. Computational Benchmark Calculations Relevant to the Neutronic Design of the Spallation Neutron Source (SNS)

    International Nuclear Information System (INIS)

    Gallmeier, F.X.; Glasgow, D.C.; Jerde, E.A.; Johnson, J.O.; Yugo, J.J.

    1999-01-01

    The Spallation Neutron Source (SNS) will provide an intense source of low-energy neutrons for experimental use. The low-energy neutrons are produced by the interaction of a high-energy (1.0 GeV) proton beam on a mercury (Hg) target and slowed down in liquid hydrogen or light water moderators. Computer codes and computational techniques are being benchmarked against relevant experimental data to validate and verify the tools being used to predict the performance of the SNS. The LAHET Code System (LCS), which includes LAHET, HTAPE and HMCNP (a modified version of MCNP version 3b), has been applied to the analysis of experiments that were conducted in the Alternating Gradient Synchrotron (AGS) facility at Brookhaven National Laboratory (BNL). In the AGS experiments, foils of various materials were placed around a mercury-filled stainless steel cylinder, which was bombarded with protons at 1.6 GeV. Neutrons created in the mercury target activated the foils. Activities of the relevant isotopes were accurately measured and compared with calculated predictions. Measurements at BNL were provided in part by collaborating scientists from JAERI as part of the AGS Spallation Target Experiment (ASTE) collaboration. To date, calculations have shown good agreement with measurements

  20. On the group approximation errors in description of neutron slowing-down at large distances from a source. Diffusion approach

    International Nuclear Information System (INIS)

    Kulakovskij, M.Ya.; Savitskij, V.I.

    1981-01-01

    The errors in multigroup calculations of the spatial and energy distribution of the neutron flux in a fast reactor shield, caused by using the group and age approximations, are considered. It is shown that at small distances from a source the age theory describes the distribution of the slowing-down density rather well. As the distance increases, the age approximation leads to underestimating the neutron fluxes, and the error grows quickly. At small distances from the source (up to 15 mean free paths in graphite) the multigroup diffusion approximation describes the distribution of the slowing-down density quite satisfactorily, and the results are almost independent of the number of groups. As the distance increases, the multigroup diffusion calculations lead to considerable overestimation of the slowing-down density. The conclusion is drawn that the errors inherent in the group approximation are opposite in sign to the error introduced by the age approximation and to some extent compensate each other

  1. A Critical Review of Naphthalene Sources and Exposures Relevant to Indoor and Outdoor Air

    Directory of Open Access Journals (Sweden)

    Chunrong Jia

    2010-07-01

    Full Text Available Both the recent classification of naphthalene as a possible human carcinogen and its ubiquitous presence motivate this critical review of naphthalene's sources and exposures. We evaluate the environmental literature on naphthalene published since 1990, drawing on nearly 150 studies that report emissions and concentrations in indoor, outdoor and personal air. While naphthalene is both a volatile organic compound and a polycyclic aromatic hydrocarbon, concentrations and exposures are poorly characterized relative to many other pollutants. Most airborne emissions result from combustion, and key sources include industry, open burning, tailpipe emissions, and cigarettes. The second largest source is off-gassing, specifically from naphthalene's use as a deodorizer, repellent and fumigant. In the U.S., naphthalene's use as a moth repellant has been reduced in favor of para-dichlorobenzene, but extensive use continues in mothballs, which appears responsible for some of the highest indoor exposures, along with off-label uses. Among the studies judged to be representative, average concentrations ranged from 0.18 to 1.7 μg m⁻³ in non-smokers' homes, and from 0.02 to 0.31 μg m⁻³ outdoors in urban areas. Personal exposures have been reported in only three European studies. Indoor sources are the major contributor to (non-occupational) exposure. While its central tendencies fall well below guideline levels relevant to acute health impacts, several studies have reported maximum concentrations exceeding 100 μg m⁻³, far above guideline levels. Using current but draft estimates of cancer risks, naphthalene is a major environmental risk driver, with typical individual risk levels in the 10⁻⁴ range, which is high and notable given that millions of individuals are exposed. Several factors influence indoor and outdoor concentrations, but the literature is inconsistent on their effects. Further investigation is needed to better characterize naphthalene

  2. Sources of water vapor to economically relevant regions in Amazonia and the effect of deforestation

    Science.gov (United States)

    Pires, G. F.; Fontes, V. C.

    2017-12-01

    The Amazon rain forest helps regulate the regional humid climate. Understanding the effects of Amazon deforestation is important to preserve not only the climate, but also economic activities that depend on it, in particular, agricultural productivity and hydropower generation. This study calculates the source of water vapor contributing to the precipitation on economically relevant regions in Amazonia according to different scenarios of deforestation. These regions include the state of Mato Grosso, which produces about 9% of the global soybean production, and the basins of the Xingu and Madeira, with infrastructure under construction that will be capable to generate 20% of the electrical energy produced in Brazil. The results show that changes in rainfall after deforestation are stronger in regions nearest to the ocean and indicate the importance of the continental water vapor source to the precipitation over southern Amazonia. In the two more continental regions (Madeira and Mato Grosso), decreases in the source of water vapor in one region were offset by increases in contributions from other continental regions, whereas in the Xingu basin, which is closer to the ocean, this mechanism did not occur. As a conclusion, the geographic location of the region is an important determinant of the resiliency of the regional climate to deforestation-induced regional climate change. The more continental the geographic location, the less climate changes after deforestation.

  3. Sensor Interaction as a Source of the Electromagnetic Field Measurement Error

    Directory of Open Access Journals (Sweden)

    Hartansky R.

    2014-12-01

    Full Text Available The article deals with the analytical calculation and numerical simulation of the mutual interaction of electromagnetic sensors. The sensors are components of a field probe, and their mutual interaction causes measurement error. The electromagnetic field probe contains three mutually perpendicular, spaced sensors in order to measure the electric field vector. The sensor error is evaluated as a function of the relative position of the sensors. Based on this, recommendations are proposed for the construction of electromagnetic field probes that minimize sensor interaction and measurement error.

  4. Human error as a source of disturbances in Swedish nuclear power plants

    International Nuclear Information System (INIS)

    Sokolowski, E.

    1985-01-01

    Events involving human errors at the Swedish nuclear power plants are registered and periodically analyzed. The philosophy behind the scheme for data collection and analysis is discussed. Human errors cause about 10% of the disturbances registered. Only a small part of these errors are committed by operators in the control room. These and other findings differ from those in other countries. Possible reasons are put forward

  5. Connecting Organic Aerosol Climate-Relevant Properties to Chemical Mechanisms of Sources and Processing

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Joel [Univ. of Washington, Seattle, WA (United States)

    2015-01-26

    The research conducted on this project aimed to improve our understanding of secondary organic aerosol (SOA) formation in the atmosphere, and how the properties of the SOA impact climate through its size, phase state, and optical properties. The goal of this project was to demonstrate that the use of molecular composition information to mechanistically connect source apportionment and climate properties can improve the physical basis for simulation of SOA formation and properties in climate models. The research involved developing and improving methods to provide online measurements of the molecular composition of SOA under atmospherically relevant conditions and to apply this technology to controlled simulation chamber experiments and field measurements. The science we have completed with the methodology will impact the simulation of aerosol particles in climate models.

  6. IFMIF, a fusion relevant neutron source for material irradiation current status

    International Nuclear Information System (INIS)

    Knaster, J.; Chel, S.; Fischer, U.; Groeschel, F.; Heidinger, R.; Ibarra, A.; Micciche, G.; Möslang, A.; Sugimoto, M.; Wakai, E.

    2014-01-01

    The d-Li based International Fusion Materials Irradiation Facility (IFMIF) will provide a high-intensity neutron source with a suitable neutron spectrum to fulfil the requirements for testing and qualifying fusion materials under fusion-reactor-relevant irradiation conditions. The IFMIF project, presently in its Engineering Validation and Engineering Design Activities (EVEDA) phase under the Broader Approach (BA) Agreement between the Japanese Government and EURATOM, aims at the construction and testing of the most challenging facility sub-systems, such as the first accelerator stage, the Li target and loop, and the irradiation test modules, as well as the design of the entire facility, so as to be ready for IFMIF construction with a clear understanding of schedule and cost at the termination of the BA in mid-2017. The paper reviews the IFMIF facility and its principles, and reports on the status of the EVEDA activities and achievements

  7. Phylogeny and source climate impact seed dormancy and germination of restoration-relevant forb species.

    Science.gov (United States)

    Seglias, Alexandra E; Williams, Evelyn; Bilge, Arman; Kramer, Andrea T

    2018-01-01

    For many species and seed sources used in restoration activities, specific seed germination requirements are often unknown. Because seed dormancy and germination traits can be constrained by phylogenetic history, related species are often assumed to have similar traits. However, significant variation in these traits is also present within species as a result of adaptation to local climatic conditions. A growing number of studies have attempted to disentangle how phylogeny and climate influence seed dormancy and germination traits, but they have focused primarily on species-level effects, ignoring potential population-level variation. We examined the relationships between phylogeny, climate, and seed dormancy and germination traits for 24 populations of eight native, restoration-relevant forb species found in a wide range of climatic conditions in the Southwest United States. The seeds were exposed to eight temperature and stratification length regimes designed to mimic regional climatic conditions. Phylogenetic relatedness, overall climatic conditions, and temperature conditions at the site were all significantly correlated with final germination response, with significant among-population variation in germination response across incubation treatments for seven of our eight study species. Notably, germination during stratification was significantly predicted by precipitation seasonality and differed significantly among populations for seven species. While previous studies have not examined germination during stratification as a potential trait influencing overall germination response, our results suggest that this trait should be included in germination studies as well as seed sourcing decisions. Results of this study deepen our understanding of the relationships between source climate, species identity, and germination, leading to improved seed sourcing decisions for restorations.

  8. Towards a realistic 3D simulation of the extraction region in ITER NBI relevant ion source

    Science.gov (United States)

    Mochalskyy, S.; Wünderlich, D.; Fantz, U.; Franzen, P.; Minea, T.

    2015-03-01

    The development of negative ion (NI) sources for ITER is strongly accompanied by modelling activities. The ONIX code addresses the physics of formation and extraction of negative hydrogen ions at caesiated sources as well as the amount of co-extracted electrons. In order to be closer to the experimental conditions the code has been improved. It now includes the bias potential applied to the first grid (plasma grid) of the extraction system, and the presence of Cs⁺ ions in the plasma. The simulation results show that such aspects play an important role for the formation of an ion-ion plasma in the boundary region by reducing the depth of the negative potential well in the vicinity of the plasma grid that limits the extraction of the NIs produced at the Cs-covered plasma grid surface. The influence of the initial temperature of the surface-produced NI and its emission rate on the NI density in the bulk plasma, which in turn affects the beam formation region, was analysed. The formation of the plasma meniscus, the boundary between the plasma and the beam, was investigated for extraction potentials of 5 and 10 kV. At the smaller extraction potential the meniscus moves closer to the plasma grid, but as in the case of 10 kV the deepest meniscus bend point is still outside of the aperture. Finally, a plasma containing the same amount of NI and electrons (n_H⁻ = n_e = 10¹⁷ m⁻³), representing good source conditioning, was simulated. It is shown that at such conditions the extracted NI current can reach values of ~32 mA cm⁻² using the ITER-relevant extraction potential of 10 kV and ~19 mA cm⁻² at 5 kV. These results are in good agreement with experimental measurements performed at the small scale ITER prototype source at the test facility BATMAN.

  9. Towards a realistic 3D simulation of the extraction region in ITER NBI relevant ion source

    International Nuclear Information System (INIS)

    Mochalskyy, S.; Wünderlich, D.; Fantz, U.; Franzen, P.; Minea, T.

    2015-01-01

    The development of negative ion (NI) sources for ITER is strongly accompanied by modelling activities. The ONIX code addresses the physics of formation and extraction of negative hydrogen ions at caesiated sources as well as the amount of co-extracted electrons. In order to be closer to the experimental conditions the code has been improved. It now includes the bias potential applied to the first grid (plasma grid) of the extraction system, and the presence of Cs⁺ ions in the plasma. The simulation results show that such aspects play an important role for the formation of an ion-ion plasma in the boundary region by reducing the depth of the negative potential well in the vicinity of the plasma grid that limits the extraction of the NIs produced at the Cs-covered plasma grid surface. The influence of the initial temperature of the surface-produced NI and its emission rate on the NI density in the bulk plasma, which in turn affects the beam formation region, was analysed. The formation of the plasma meniscus, the boundary between the plasma and the beam, was investigated for extraction potentials of 5 and 10 kV. At the smaller extraction potential the meniscus moves closer to the plasma grid, but as in the case of 10 kV the deepest meniscus bend point is still outside of the aperture. Finally, a plasma containing the same amount of NI and electrons (n_H⁻ = n_e = 10¹⁷ m⁻³), representing good source conditioning, was simulated. It is shown that at such conditions the extracted NI current can reach values of ~32 mA cm⁻² using the ITER-relevant extraction potential of 10 kV and ~19 mA cm⁻² at 5 kV. These results are in good agreement with experimental measurements performed at the small scale ITER prototype source at the test facility BATMAN. (paper)

  10. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    Full Text Available It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation of the atomic masses. The reference data of Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of elements in the Table).

  11. On the Source of the Systematic Errors in the Quantum Mechanical Calculation of the Superheavy Elements

    Directory of Open Access Journals (Sweden)

    Khazan A.

    2010-10-01

    Full Text Available It is shown that only the hyperbolic law of the Periodic Table of Elements allows the exact calculation of the atomic masses. The reference data of Periods 8 and 9 manifest a systematic error in the computer software applied to such a calculation (this systematic error increases with the number of elements in the Table).

  12. Size, Composition, and Sources of Health Relevant Particulate Matter in the San Joaquin Valley

    Science.gov (United States)

    Ham, Walter Allan

    Particulate Matter (PM) is an environmental contaminant that has been associated with adverse health effects in epidemiological and toxicological studies. Atmospheric PM is made up of a diverse array of chemical species that are emitted from multiple sources across a range of aerodynamic diameters spanning several orders of magnitude. The focus of the present work was the characterization of ambient PM with aerodynamic diameters below 1.8 µm (PM1.8) in 6 size sub-fractions including PM0.1. Chemical species measured included organic carbon, elemental carbon, water-soluble ions, trace metals, and organic molecular markers in urban and rural environments in the San Joaquin Valley. These measurements were used to determine differences in relative diurnal size distributions during a severe winter stagnation event, seasonal changes in PM size and composition, and the source origin of carbonaceous PM. This size-resolved information was used to calculate lung deposition patterns of health-relevant PM species to evaluate seasonal differences in PM dose. By accurately calculating PM dose, researchers are able to more directly link ambient PM characterization data with biological endpoints. All of these results are used to support ongoing toxicological health effects studies. Such analyses are important, as this type of information may assist regulators with developing control strategies to reduce health effects caused by particulate air pollution.

  13. Emissions of perfluorinated alkylated substances (PFAS) from point sources--identification of relevant branches.

    Science.gov (United States)

    Clara, M; Scheffknecht, C; Scharf, S; Weiss, S; Gans, O

    2008-01-01

    Effluents of wastewater treatment plants are relevant point sources for the emission of hazardous xenobiotic substances to the aquatic environment. One group of substances which recently entered scientific and political discussions is the group of perfluorinated alkylated substances (PFAS). The most studied compounds from this group are perfluorooctanoic acid (PFOA) and perfluorooctane sulphonate (PFOS), which are the most important degradation products of PFAS. These two substances are known to be persistent, bioaccumulative and toxic (PBT). In the present study, eleven PFAS were investigated in effluents of municipal wastewater treatment plants (WWTP) and in industrial wastewaters. PFOS and PFOA proved to be the dominant compounds in all sampled wastewaters. Concentrations of up to 340 ng/L of PFOS and up to 220 ng/L of PFOA were observed. Besides these two compounds, perfluorohexanoic acid (PFHxA) was also present in nearly all effluents, and maximum concentrations of up to 280 ng/L were measured. Only N-ethylperfluorooctane sulphonamide (N-EtPFOSA) and its degradation/metabolisation product perfluorooctane sulphonamide (PFOSA) were either detected below the limit of quantification or not detected at all. Besides the effluents of the municipal WWTPs, nine industrial wastewaters from six different industrial branches were also investigated. Significantly, the highest PFOS emissions were observed from the metal industry, whereas the paper industry showed the highest PFOA emissions. Several PFAS, especially perfluorononanoic acid (PFNA), perfluorodecanoic acid (PFDA), perfluorododecanoic acid (PFDoA) and PFOS, are predominantly emitted from industrial sources, with concentrations being a factor of 10 higher than those observed in the municipal WWTP effluents. Perfluorodecane sulphonate (PFDS), N-Et-PFOSA and PFOSA were not detected in any of the sampled industrial point sources. (c) IWA Publishing 2008.

  14. Explicit control of image noise and error properties in cone-beam microtomography using dual concentric circular source loci

    International Nuclear Information System (INIS)

    Davis, Graham

    2005-01-01

    Cone-beam reconstruction from projections with a circular source locus (relative to the specimen) is commonly used in X-ray microtomography systems. Although this method does not provide an 'exact' reconstruction, since there is insufficient data in the projections, the approximation is considered adequate for many purposes. However, some specimens, with sharp changes in X-ray attenuation in the direction of the rotation axis, are particularly prone to cone-beam-related errors. These errors can be reduced by increasing the source-to-specimen distance, but at the expense of reduced signal-to-noise ratio or increased scanning time. An alternative method, based on heuristic arguments, is to scan the specimen with both short and long source-to-specimen distances and combine high-frequency components from the former reconstruction with low-frequency ones from the latter. This composite reconstruction has the low noise characteristics of the short source-to-specimen reconstruction and the low cone-beam errors of the long one. This has been tested with simulated data representing a particularly error-prone specimen.

  15. Temporal dynamics of conflict monitoring and the effects of one or two conflict sources on error-(related) negativity.

    Science.gov (United States)

    Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta

    2010-09-01

    The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which comprised two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude was found after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source), whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  16. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    Science.gov (United States)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error; the individual errors can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion
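
    A minimal numerical sketch of the cost function minimization (CFM) step described above may help make the sensitivity to the assumed error variances concrete. All names and values below (the footprint matrix H, the synthetic "target" scaling factors, and the prior and mismatch variances) are illustrative assumptions, not quantities from the study.

```python
import numpy as np

def cfm_estimate(H, y, x_prior, prior_var, obs_var):
    """Analytic minimiser of the cost function
    J(x) = (x - x_prior)^T B^-1 (x - x_prior) + (H x - y)^T R^-1 (H x - y)."""
    B_inv = np.diag(1.0 / prior_var)   # inverse prior flux error covariance (diagonal)
    R_inv = np.diag(1.0 / obs_var)     # inverse model-observation mismatch covariance
    A = H.T @ R_inv @ H + B_inv
    b = H.T @ R_inv @ y + B_inv @ x_prior
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
n_regions, n_obs = 4, 60
H = rng.uniform(0.0, 1.0, size=(n_obs, n_regions))   # hypothetical transport/footprint matrix
x_true = np.array([1.2, 0.8, 1.0, 1.5])              # hypothetical "target" scaling factors
y = H @ x_true + rng.normal(0.0, 0.05, size=n_obs)   # synthetic observations

x_hat = cfm_estimate(H, y,
                     x_prior=np.ones(n_regions),
                     prior_var=np.full(n_regions, 0.5 ** 2),
                     obs_var=np.full(n_obs, 0.05 ** 2))
print("estimation error (%):", np.round(100 * (x_hat - x_true) / x_true, 2))
```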

  17. Estimation of distance error by fuzzy set theory required for strength determination of HDR (192)Ir brachytherapy sources.

    Science.gov (United States)

    Kumar, Sudhir; Datta, D; Sharma, S D; Chourasiya, G; Babu, D A R; Sharma, D N

    2014-04-01

    Verification of the strength of high dose rate (HDR) (192)Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm(3) is one of the recommended methods for measuring the RAKR of HDR (192)Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source, which is called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error. A simplified approach of applying fuzzy set theory in the quantification of the uncertainty associated with the distance error has been proposed. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such a distance may be of this order. It is observed that the relative distances l_i estimated by the analytical method and by the fuzzy set theoretic approach are consistent with each other. The crisp values of l_i estimated using the analytical method lie within the bounds computed using fuzzy set theory. This indicates that l_i values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated into the uncertainty budget while estimating the expanded uncertainty in HDR (192)Ir source strength measurement.
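
    The following toy sketch illustrates, under assumptions, how a distance could be represented as a triangular fuzzy number and how a simple uncertainty index could be read off its alpha-cuts; the nominal distance, the spread and the index definition are hypothetical and are not taken from the paper.

```python
import numpy as np

def triangular_alpha_cut(low, mode, high, alpha):
    """Interval [L, U] of a triangular fuzzy number at membership level alpha."""
    return (low + alpha * (mode - low), high - alpha * (high - mode))

nominal = 100.0                       # assumed nominal chamber-to-source distance (mm)
low, high = 97.5, 102.5               # assumed spread from positioning tolerance (mm)
alphas = np.linspace(0.0, 1.0, 11)
half_widths = [0.5 * (u - l) for l, u in
               (triangular_alpha_cut(low, nominal, high, a) for a in alphas)]

# A crude uncertainty index: mean relative half-width over all alpha-cuts.
uncertainty_index = np.mean(half_widths) / nominal
print(f"uncertainty index ~ {100 * uncertainty_index:.2f} %")
```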

  18. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation so that the spatial variation of the source term becomes small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used in evaluating the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method does not have any negative impact on execution time, convergence behavior or memory requirements, it will be useful to reduce the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)

  19. Analysis of strain error sources in micro-beam Laue diffraction

    International Nuclear Information System (INIS)

    Hofmann, Felix; Eve, Sophie; Belnoue, Jonathan; Micha, Jean-Sébastien; Korsunsky, Alexander M.

    2011-01-01

    Micro-beam Laue diffraction is an experimental method that allows the measurement of local lattice orientation and elastic strain within individual grains of engineering alloys, ceramics, and other polycrystalline materials. Unlike other analytical techniques, e.g. based on electron microscopy, it is not limited to surface characterisation or thin sections, but rather allows non-destructive measurements in the material bulk. This is of particular importance for in situ loading experiments where the mechanical response of a material volume (rather than just surface) is studied and it is vital that no perturbation/disturbance is introduced by the measurement technique. Whilst the technique allows lattice orientation to be determined to a high level of precision, accurate measurement of elastic strains and estimating the errors involved is a significant challenge. We propose a simulation-based approach to assess the elastic strain errors that arise from geometrical perturbations of the experimental setup. Using an empirical combination rule, the contributions of different geometrical uncertainties to the overall experimental strain error are estimated. This approach was applied to the micro-beam Laue diffraction setup at beamline BM32 at the European Synchrotron Radiation Facility (ESRF). Using a highly perfect germanium single crystal, the mechanical stability of the instrument was determined and hence the expected strain errors predicted. Comparison with the actual strain errors found in a silicon four-point beam bending test showed good agreement. The simulation-based error analysis approach makes it possible to understand the origins of the experimental strain errors and thus allows a directed improvement of the experimental geometry to maximise the benefit in terms of strain accuracy.

  20. Energy-Water Nexus Relevant to Baseload Electricity Source Including Mini/Micro Hydropower Generation

    Science.gov (United States)

    Fujii, M.; Tanabe, S.; Yamada, M.

    2014-12-01

    Water, food and energy are three sacred treasures that are necessary for human beings. However, recent factors such as population growth and the rapid increase in energy consumption have generated conflicts between water and energy. For example, conflicts caused by enhanced energy use have arisen between hydropower generation and riverine ecosystems and service water, between shale gas and groundwater, and between geothermal energy and hot spring water. This study aims to provide the quantitative guidelines necessary for capacity building among various stakeholders to minimize water-energy conflicts when enhancing energy use. Among the various kinds of renewable energy sources, we target baseload sources, especially focusing on renewable energy whose installation is socially required not only to reduce CO2 and other greenhouse gas emissions but also to stimulate the local economy. Such renewable energy sources include micro/mini hydropower and geothermal. Three municipalities in Japan, Beppu City, Obama City and Otsuchi Town, are selected as the primary sites of this study. Based on the calculated potential supply and demand of micro/mini hydropower generation in Beppu City, for example, we estimate that the electricity demand of tens to hundreds of households could be covered by installing new micro/mini hydropower generation plants along each river. However, this result is based on the existing infrastructure such as roads and electric lines. This means that a larger potential is expected if the local society chooses options that enhance this infrastructure to accommodate more micro/mini hydropower generation plants. In addition, further capacity building in the local society is necessary. In Japan, for example, regulations under the river law and irrigation rights restrict new entry of actors to the river. Possible influences on riverine ecosystems when installing new micro/mini hydropower generation plants should also be taken into account. Deregulation of the existing laws relevant to rivers and

  1. Proceedings of the workshop on ion source issues relevant to a pulsed spallation neutron source: Part 1: Workshop summary

    International Nuclear Information System (INIS)

    Schroeder, L.; Leung, K.N.; Alonso, J.

    1994-10-01

    The workshop reviewed the ion-source requirements for high-power accelerator-driven spallation neutron facilities, and the performance of existing ion sources. Proposals for new facilities in the 1- to 5-MW range call for a widely differing set of ion-source requirements. For example, the source peak current requirements vary from 40 mA to 150 mA, while the duty factor ranges from 1% to 9%. Much of the workshop discussion centered on the state of the art of negative hydrogen ion source (H-) technology and the present experience with Penning and volume sources. In addition, other ion source technologies, for positive ions or CW applications, were reviewed. Some of these sources have been operational at existing accelerator complexes and some are in the source-development stage on test stands

  2. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    Science.gov (United States)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.
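
    As a small illustration of the error measures reported above, the sketch below computes a dipole location error (the Euclidean distance between true and estimated positions) and an orientation error (the angle between moment vectors); the example positions and orientations are arbitrary illustrative values.

```python
import numpy as np

def dipole_errors(pos_true, pos_est, ori_true, ori_est):
    """Location error (same units as the positions) and orientation error (degrees)."""
    loc_err = np.linalg.norm(np.asarray(pos_est) - np.asarray(pos_true))
    u = np.asarray(ori_true, dtype=float) / np.linalg.norm(ori_true)
    v = np.asarray(ori_est, dtype=float) / np.linalg.norm(ori_est)
    ori_err_deg = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return loc_err, ori_err_deg

loc, ori = dipole_errors(pos_true=[40.0, 10.0, 55.0], pos_est=[47.0, 12.0, 58.0],
                         ori_true=[0.0, 0.0, 1.0], ori_est=[0.1, 0.05, 1.0])
print(f"location error: {loc:.1f} mm, orientation error: {ori:.1f} deg")
```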

  3. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    International Nuclear Information System (INIS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-01-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10 deg. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.

  4. Performance Analysis for Bit Error Rate of DS-CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available The minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) related to the number of users (receivers). Minimum energy coding exploits redundant bits to save power while using an RF link with On-Off-Keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR for different numbers of users (receivers).
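
    As a hedged illustration of how the bit error rate falls with SNR, the sketch below evaluates the standard coherent On-Off-Keying AWGN expression BER = Q(sqrt(Eb/N0)); it ignores multiple access interference and the ME-coding redundancy discussed in the abstract, so it is not the paper's exact model.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ook_ber(ebn0_db):
    """Coherent OOK in AWGN, with Eb the average energy per bit."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q_function(math.sqrt(ebn0))

for snr_db in (0, 4, 8, 12):
    print(f"Eb/N0 = {snr_db:2d} dB -> BER ~ {ook_ber(snr_db):.3e}")
```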

  5. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    Science.gov (United States)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources of those SOC estimation methods. This paper firstly reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to analyze the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with consideration of the working conditions, choosing more reliable and applicable SOC estimation methods is discussed, and the future development of promising online SOC estimation methods is suggested.
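
    One error source highlighted above, the propagation of measurement error into the SOC estimate, can be illustrated with a coulomb-counting sketch in which a constant current-sensor bias accumulates over time; the capacity, bias and current profile are assumed values chosen for demonstration.

```python
import numpy as np

def coulomb_count(current_a, dt_s, capacity_ah, soc0=1.0):
    """SOC from integrating current over time (positive current = discharge)."""
    charge_ah = np.cumsum(current_a) * dt_s / 3600.0
    return soc0 - charge_ah / capacity_ah

dt = 1.0                                                      # s
t = np.arange(0, 3600, dt)
true_current = 10.0 + 5.0 * np.sin(2 * np.pi * t / 600.0)     # A, assumed load profile
measured_current = true_current + 0.2                         # constant 0.2 A sensor bias

soc_true = coulomb_count(true_current, dt, capacity_ah=50.0)
soc_est = coulomb_count(measured_current, dt, capacity_ah=50.0)
print(f"SOC error after 1 h from a 0.2 A sensor bias: "
      f"{100 * (soc_est[-1] - soc_true[-1]):.2f} % SOC")
```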

  6. Potential sources of analytical bias and error in selected trace element data-quality analyses

    Science.gov (United States)

    Paul, Angela P.; Garbarino, John R.; Olsen, Lisa D.; Rosen, Michael R.; Mebane, Christopher A.; Struzeski, Tedmund M.

    2016-09-28

    Potential sources of analytical bias and error associated with laboratory analyses for selected trace elements where concentrations were greater in filtered samples than in paired unfiltered samples were evaluated by U.S. Geological Survey (USGS) Water Quality Specialists in collaboration with the USGS National Water Quality Laboratory (NWQL) and the Branch of Quality Systems (BQS). Causes for trace-element concentrations in filtered samples to exceed those in associated unfiltered samples have been attributed to variability in analytical measurements, analytical bias, sample contamination either in the field or laboratory, and (or) sample-matrix chemistry. These issues have not only been attributed to data generated by the USGS NWQL but have been observed in data generated by other laboratories. This study continues the evaluation of potential analytical bias and error resulting from matrix chemistry and instrument variability by evaluating the performance of seven selected trace elements in paired filtered and unfiltered surface-water and groundwater samples collected from 23 sampling sites of varying chemistries from six States, matrix spike recoveries, and standard reference materials. Filtered and unfiltered samples have been routinely analyzed on separate inductively coupled plasma-mass spectrometry instruments. Unfiltered samples are treated with hydrochloric acid (HCl) during an in-bottle digestion procedure; filtered samples are not routinely treated with HCl as part of the laboratory analytical procedure. To evaluate the influence of HCl on different sample matrices, an aliquot of the filtered samples was treated with HCl. The addition of HCl did little to differentiate the analytical results between filtered samples treated with HCl from those samples left untreated; however, there was a small, but noticeable, decrease in the number of instances where a particular trace-element concentration was greater in a filtered sample than in the associated

  7. Release modes and processes relevant to source-term calculations at Yucca Mountain

    International Nuclear Information System (INIS)

    Apted, M.J.

    1994-01-01

    The feasibility of permanent disposal of radioactive high-level waste (HLW) in repositories located in deep geologic formations is being studied world-wide. The most credible release pathway is interaction between groundwater and nuclear waste forms, followed by migration of radionuclide-bearing groundwater to the accessible environment. Under hydrologically unsaturated conditions, vapor transport of volatile radionuclides is also possible. The near-field encompasses the waste packages composed of engineered barriers (e.g. man-made materials, such as vitrified waste forms and corrosion-resistant containers), while the far-field includes the natural barriers (e.g. host rock, hydrologic setting). Taken together, these two subsystems define a series of multiple, redundant barriers that act to assure the safe isolation of nuclear waste. In the U.S., the Department of Energy (DOE) is investigating the feasibility of safe, long-term disposal of high-level nuclear waste at the Yucca Mountain site in Nevada. The proposed repository horizon is located in non-welded tuffs within the unsaturated zone (i.e. above the water table) at Yucca Mountain. The purpose of this paper is to describe the source-term models for radionuclide release from waste packages at the Yucca Mountain site. The first section describes the conceptual release modes that are relevant for this site and waste package design, based on a consideration of the performance of currently proposed engineered barriers under expected and unexpected conditions. No attempt is made to assess the reasonableness or probability of occurrence of any specific release mode. The following section reviews the waste-form characteristics that are required to model and constrain the release of radionuclides from the waste package. The next section presents mathematical models for the conceptual release modes, selected from those that have been implemented into a probabilistic total system assessment code developed for the Electric Power

  8. Computer input devices: neutral party or source of significant error in manual lesion segmentation?

    Science.gov (United States)

    Chen, James Y; Seagull, F Jacob; Nagy, Paul; Lakhani, Paras; Melhem, Elias R; Siegel, Eliot L; Safdar, Nabile M

    2011-02-01

    Lesion segmentation involves outlining the contour of an abnormality on an image to distinguish boundaries between normal and abnormal tissue and is essential to track malignant and benign disease in medical imaging for clinical, research, and treatment purposes. A laser optical mouse and a graphics tablet were used by radiologists to segment 12 simulated reference lesions per subject in two groups, one group for each input device; each group consisted of six lesions comprising three morphologies in two sizes each. Time for segmentation was recorded. Subjects completed an opinion survey following segmentation. Error in contour segmentation was calculated using root mean square error. Error in area of segmentation was calculated compared to the reference lesion. Eleven radiologists segmented a total of 132 simulated lesions. Overall error in contour segmentation was less with the graphics tablet than with the mouse. Error in area of segmentation was not significantly different between the tablet and the mouse (P = 0.62). Time for segmentation was less with the tablet than with the mouse (P = 0.011). All subjects preferred the graphics tablet for future segmentation (P = 0.011) and felt subjectively that the tablet was faster, easier, and more accurate (P = 0.0005). For purposes in which accuracy in the contour of lesion segmentation is of greater importance, the graphics tablet is superior to the mouse in accuracy, with a small speed benefit. For purposes in which accuracy of the area of lesion segmentation is of greater importance, the graphics tablet and mouse are equally accurate.
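
    A short sketch of the two accuracy measures used above, applied to made-up contours: the root mean square error of the distance between corresponding contour points and the relative error in enclosed area (shoelace formula). The circular reference lesion and the simulated tracing are illustrative only.

```python
import numpy as np

def contour_rmse(ref_xy, seg_xy):
    """RMSE of point-to-point distance between corresponding contour points."""
    return float(np.sqrt(np.mean(np.sum((ref_xy - seg_xy) ** 2, axis=1))))

def polygon_area(xy):
    """Enclosed area of a closed contour via the shoelace formula."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
ref = np.column_stack([10.0 * np.cos(theta), 10.0 * np.sin(theta)])   # reference lesion (mm)
seg = np.column_stack([10.4 * np.cos(theta), 9.8 * np.sin(theta)])    # simulated user tracing

print(f"contour RMSE: {contour_rmse(ref, seg):.2f} mm")
print(f"area error: {100 * (polygon_area(seg) - polygon_area(ref)) / polygon_area(ref):.1f} %")
```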

  9. Assessing Variability and Errors in Historical Runoff Forecasting with Physical Models and Alternative Data Sources

    Science.gov (United States)

    Penn, C. A.; Clow, D. W.; Sexstone, G. A.

    2017-12-01

    Water supply forecasts are an important tool for water resource managers in areas where surface water is relied on for irrigating agricultural lands and for municipal water supplies. Forecast errors, which correspond to inaccurate predictions of total surface water volume, can lead to mis-allocated water and productivity loss, thus costing stakeholders millions of dollars. The objective of this investigation is to provide water resource managers with an improved understanding of factors contributing to forecast error, and to help increase the accuracy of future forecasts. In many watersheds of the western United States, snowmelt contributes 50-75% of annual surface water flow and controls both the timing and volume of peak flow. Water supply forecasts from the Natural Resources Conservation Service (NRCS), National Weather Service, and similar cooperators use precipitation and snowpack measurements to provide water resource managers with an estimate of seasonal runoff volume. The accuracy of these forecasts can be limited by available snowpack and meteorological data. In the headwaters of the Rio Grande, NRCS produces January through June monthly Water Supply Outlook Reports. This study evaluates the accuracy of these forecasts since 1990, and examines what factors may contribute to forecast error. The Rio Grande headwaters has experienced recent changes in land cover from bark beetle infestation and a large wildfire, which can affect hydrological processes within the watershed. To investigate trends and possible contributing factors in forecast error, a semi-distributed hydrological model was calibrated and run to simulate daily streamflow for the period 1990-2015. Annual and seasonal watershed and sub-watershed water balance properties were compared with seasonal water supply forecasts. Gridded meteorological datasets were used to assess changes in the timing and volume of spring precipitation events that may contribute to forecast error. Additionally, a

  10. Review of current GPS methodologies for producing accurate time series and their error sources

    Science.gov (United States)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors, and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional-scale geodetic phenomena, which requires further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step, mainly with three different strategies, in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of the analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e
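
    The functional model mentioned above (a linear rate plus annual and semi-annual seasonal terms) can be illustrated with a simple least-squares fit to a synthetic daily series; real analyses would add offsets and a realistic power-law noise model, so this is only a sketch with assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
t_yr = np.arange(0.0, 10.0, 1.0 / 365.25)                 # 10 years of daily epochs
truth = 3.0 * t_yr + 2.0 * np.sin(2 * np.pi * t_yr) + 1.0 * np.cos(4 * np.pi * t_yr)
obs_mm = truth + rng.normal(0.0, 1.5, t_yr.size)          # white noise only (simplification)

# Design matrix: intercept, rate, annual and semi-annual sine/cosine terms.
A = np.column_stack([np.ones_like(t_yr), t_yr,
                     np.sin(2 * np.pi * t_yr), np.cos(2 * np.pi * t_yr),
                     np.sin(4 * np.pi * t_yr), np.cos(4 * np.pi * t_yr)])
coef, *_ = np.linalg.lstsq(A, obs_mm, rcond=None)
print(f"estimated rate: {coef[1]:.2f} mm/yr (true 3.00 mm/yr)")
```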

  11. Evaluation of the sources of error in the linepack estimation of a natural gas pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Marco, Fabio Capelassi Gavazzi de [Transportadora Brasileira Gasoduto Bolivia-Brasil S.A. (TBG), Rio de Janeiro, RJ (Brazil)

    2012-07-01

    The intent of this work is to explore the behavior of the random error associated with the determination of linepack in a complex natural gas pipeline, based on the effect introduced by the uncertainty of the different variables involved. There are many parameters involved in the determination of the gas inventory in a transmission pipeline: geometrical (diameter, length and elevation profile), operational (pressure, temperature and gas composition), environmental (ambient/ground temperature) and those dependent on the modeling assumptions (compressibility factor and heat transfer coefficient). Due to the extent of a natural gas pipeline and the vast number of sensors involved, it is infeasible to determine analytically the magnitude of the resulting uncertainty in the linepack; this problem has therefore been addressed using the Monte Carlo method. The approach consists of introducing random errors into the values of pressure, temperature and gas gravity that are employed in the determination of the linepack and verifying their impact. Additionally, the errors associated with three different modeling assumptions used to estimate the linepack are explored. The results reveal that pressure is the most critical variable while temperature is the least critical. In regard to the different methods of estimating the linepack, deviations around 1.6% were verified among the methods. (author)
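
    A Monte Carlo sketch in the spirit of the approach described above: perturb pressure, temperature and gas gravity with assumed measurement uncertainties for a single pipe segment and observe the spread of the resulting linepack. The segment volume, base conditions and constant compressibility factor are illustrative simplifications, not the study's model or values.

```python
import numpy as np

R = 8.314        # J/(mol K)
M_AIR = 0.02896  # kg/mol

def linepack_kg(p_pa, t_k, gravity, volume_m3, z=0.9):
    """Gas mass held in a fixed geometric volume, real-gas law with a constant Z."""
    molar_mass = gravity * M_AIR
    return p_pa * volume_m3 * molar_mass / (z * R * t_k)

rng = np.random.default_rng(42)
n = 20000
p = rng.normal(7.0e6, 0.005 * 7.0e6, n)   # pressure: 7 MPa with an assumed 0.5% std
t = rng.normal(293.0, 1.0, n)             # temperature: 293 K with an assumed 1 K std
g = rng.normal(0.60, 0.003, n)            # gas gravity with an assumed 0.5% std

mass = linepack_kg(p, t, g, volume_m3=5.0e4)
print(f"linepack = {mass.mean() / 1e6:.3f} +/- {mass.std() / 1e6:.3f} kt "
      f"({100 * mass.std() / mass.mean():.2f} % relative)")
```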

  12. Sources of Phoneme Errors in Repetition: Perseverative, Neologistic, and Lesion Patterns in Jargon Aphasia

    Directory of Open Access Journals (Sweden)

    Emma Pilkington

    2017-05-01

    Full Text Available This study examined patterns of neologistic and perseverative errors during word repetition in fluent Jargon aphasia. The principal hypotheses accounting for Jargon production indicate that poor activation of a target stimulus leads to weakly activated target phoneme segments, which are outcompeted at the phonological encoding level. Voxel-lesion symptom mapping studies of word repetition errors suggest a breakdown in the translation from auditory-phonological analysis to motor activation. Behavioral analyses of repetition data were used to analyse the target relatedness (Phonological Overlap Index, POI) of neologistic errors and patterns of perseveration in 25 individuals with Jargon aphasia. Lesion-symptom analyses explored the relationship between neurological damage and jargon repetition in a group of 38 participants with aphasia. Behavioral results showed that neologisms produced by 23 jargon individuals contained greater degrees of target lexico-phonological information than predicted by chance and that neologistic and perseverative production were closely associated. A significant relationship between jargon production and lesions to temporoparietal regions was identified. Region of interest regression analyses suggested that damage to the posterior superior temporal gyrus and superior temporal sulcus in combination was best predictive of a Jargon aphasia profile. Taken together, these results suggest that poor phonological encoding, secondary to impairment in sensory-motor integration, alongside impairments in self-monitoring, results in jargon repetition. Insights for clinical management and future directions are discussed.

  13. Sources of Data and Expertise for Environmental Factors Relevant to Amphibious Operations

    National Research Council Canada - National Science Library

    Andrew, Colin

    2000-01-01

    .... Before embarking on a research program it seemed worthwhile to survey the institutions and personnel who already have expertise in the gathering and analysis of relevant environmental data types...

  14. Systematical and statistical errors in using reference light sources to calibrate TLD readers

    International Nuclear Information System (INIS)

    Burgkhardt, B.; Piesch, E.

    1981-01-01

    Three light sources, namely an NaI(Tl) scintillator + Ra, an NaI(Tl) scintillator + 14C and a plastic scintillator + 14C, were used during a period of 24 months for a daily check of two TLD readers: the Harshaw 2000 A + B and the Toledo 651. On the basis of light source measurements, long-term changes and day-to-day fluctuations of the reader response were investigated. Systematic changes of the Toledo reader response of up to 6% during a working week are explained by nitrogen effects in the plastic scintillator light source. It was found that the temperature coefficient of the light source intensity was -0.05%/°C for the plastic scintillator and -0.3%/°C for the NaI(Tl) scintillator. The 210Pb content in the Ra-activated NaI(Tl) scintillator caused a time-dependent decrease in light source intensity of 3%/yr for the light source in the Harshaw reader. The internal light sources revealed a relative standard deviation of 0.5% for the Toledo reader and the Harshaw reader after respective reading times of 0.45 and 100 sec. (author)
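
    The two corrections implied by these coefficients can be sketched as follows, using the temperature coefficient quoted for the plastic scintillator and the yearly intensity loss quoted for the Ra-activated NaI(Tl) source; applying both to a single reading, and the reading and reference temperature themselves, are purely illustrative assumptions.

```python
def corrected_reading(reading, temp_c, ref_temp_c=20.0,
                      temp_coeff_per_c=-0.0005, yearly_loss=0.03, age_years=1.0):
    """Refer a light-source reading back to the reference temperature and to the
    source intensity at the start of its use."""
    temp_factor = 1.0 + temp_coeff_per_c * (temp_c - ref_temp_c)   # -0.05 %/degC
    decay_factor = (1.0 - yearly_loss) ** age_years                # 3 %/yr intensity loss
    return reading / (temp_factor * decay_factor)

# Hypothetical reading of 980 units taken at 25 degC with a 2-year-old source.
print(f"{corrected_reading(980.0, temp_c=25.0, age_years=2.0):.1f}")
```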

  15. Main error sources in sorption technique and plasma electron component parameter definition by continuous X radiation

    International Nuclear Information System (INIS)

    Gavrilov, V.V.; Torokhova, N.V.; Fasakhov, I.K.

    1986-01-01

    The effect of recombination radiation on the ratio of signals behind the filters, which depends on the plasma temperature (the sorption method for determining T), is demonstrated. This factor has the largest effect on the accuracy of the method (100-400%); the other factors analysed introduce a combined error in temperature at the level of 50%. A method for reconstructing the plasma electron distribution function from the continuous X-radiation spectrum is presented, based on the correctness (under certain limitations on the required function) of the equation linking the electron distribution function with the bremsstrahlung spectral density.

  16. On the errors in measurements of Ohio 5 radio sources in the light of the GB survey

    International Nuclear Information System (INIS)

    Machalski, J.

    1975-01-01

    Positions and flux densities of 405 OSU 5 radio sources surveyed at 1415 MHz down to 0.18 f.u. (Brundage et al. 1971) have been examined in the light of data from the GB survey made at 1400 MHz (Maslowski 1972). An identification analysis has shown that about 56% of the OSU sources appear as single, 18% as confused, 20% as unresolved, and 6%, having no counterparts in the GB survey down to 0.09 f.u., seem to be spurious. The single OSU sources are strongly affected by the underestimation of their flux densities due to the base-line procedure in their vicinity. An average value of about 0.03 f.u. has been found for this systematic underestimation. The second systematic error is due to the presence of a significant number of confused sources with strongly overestimated flux densities. The confusion effect gives a characteristic non-Gaussian tail in the distribution of differences between observed and real flux densities. The confusion effect has a strong influence on source counts from the OSU 5 survey. Differential number counts relative to those from the GB survey show that the counts agree within the statistical uncertainty down to about 0.40 f.u., which is approximately 4 delta (where delta is the average rms flux density error in the OSU 5 survey). Below 0.40 f.u. the number of sources missing due to the confusion effect is significantly greater than the number overestimated due to the noise error. Thus, this part of the OSU 5 source counts cannot be treated seriously, even in the statistical sense. An analysis of the approximate reliability and completeness of the OSU 5 survey shows that, although the total reliability estimated by the authors of the survey is good, the completeness is significantly lower due to the underestimation of the magnitude of the confusion effect. In fact, the OSU 5 completeness is 67% at 0.18 f.u. and 79% at 0.25 f.u. (author)

  17. Inclusions in bone material as a source of error in radiocarbon dating

    International Nuclear Information System (INIS)

    Hassan, A.A.; Ortner, D.J.

    1977-01-01

    Electron probe microanalysis, X-ray diffraction and microscopic examination were conducted on bone material from several archaeological sites in order to identify post-burial inclusions which, if present, may affect radiocarbon dating of bone. Two types of inclusions were identified: (1) precipitates from ground water solutions, and (2) solid intrusions. The first type consists of calcite, pyrite, humates and an unknown material. The second type includes quartz grains, hyphae, rootlets, wood and charcoal. Precipitation of calcite at a macro-molecular level in bone may lead to erroneous dating of bone apatite if such calcite is not removed completely. A special technique, therefore, must be employed to remove calcite completely. Hyphae and rootlets also are likely to induce errors in radiocarbon dating of bone collagen. These very fine inclusions require more than hand picking. (author)

  18. Imagery encoding and false recognition errors: Examining the role of imagery process and imagery content on source misattributions.

    Science.gov (United States)

    Foley, Mary Ann; Foy, Jeffrey; Schlemmer, Emily; Belser-Ehrlich, Janna

    2010-11-01

    Imagery encoding effects on source-monitoring errors were explored using the Deese-Roediger-McDermott paradigm in two experiments. While viewing thematically related lists embedded in mixed picture/word presentations, participants were asked to generate images of objects or words (Experiment 1) or to simply name the items (Experiment 2). An encoding task intended to induce spontaneous images served as a control for the explicit imagery instruction conditions (Experiment 1). On the picture/word source-monitoring tests, participants were much more likely to report "seeing" a picture of an item presented as a word than the converse particularly when images were induced spontaneously. However, this picture misattribution error was reversed after generating images of words (Experiment 1) and was eliminated after simply labelling the items (Experiment 2). Thus source misattributions were sensitive to the processes giving rise to imagery experiences (spontaneous vs deliberate), the kinds of images generated (object vs word images), and the ways in which materials were presented (as pictures vs words).

  19. Free and Open Source GIS Tools: Role and Relevance in the Environmental Assessment Community

    Science.gov (United States)

    The presence of an explicit geographical context in most environmental decisions can complicate assessment and selection of management options. These decisions typically involve numerous data sources, complex environmental and ecological processes and their associated models, ris...

  20. [Sources of error in the European Pharmacopoeia assay of halide salts of organic bases by titration with alkali].

    Science.gov (United States)

    Kószeginé, S H; Ráfliné, R Z; Paál, T; Török, I

    2000-01-01

    A short overview has been given by the authors of the titrimetric assay methods for halide salts of organic bases in the pharmacopoeias of greatest importance. The alternative procedures introduced by the European Pharmacopoeia Commission some years ago to replace the non-aqueous titration with perchloric acid in the presence of mercuric acetate have also been presented and evaluated. The authors investigated the limits of applicability and the sources of systematic error (bias) of the strongly preferred titration with sodium hydroxide in an alcoholic medium. To assess the bias due to the differences between the results calculated from the two inflexion points of the titration curves and the two real endpoints corresponding to the strong and weak acids, respectively, a mathematical analysis of the titration curve function was carried out. This bias, generally negligible when the pH change near the endpoint of the titration is more than 1 unit, is a function of the concentration, the apparent pK of the analyte and the ionic product of water (ethanol) in the alcohol-water mixtures. Using the validation data gained for the method with the titration of ephedrine hydrochloride, the authors analysed the impact of carbon dioxide in the titration medium on the additive and proportional systematic errors of the method. The newly introduced standardisation procedure of the European Pharmacopoeia for the sodium hydroxide titrant, intended to decrease the systematic errors caused by carbon dioxide, has also been evaluated.

  1. Active control of aircraft engine inlet noise using compact sound sources and distributed error sensors

    Science.gov (United States)

    Burdisso, Ricardo (Inventor); Fuller, Chris R. (Inventor); O'Brien, Walter F. (Inventor); Thomas, Russell H. (Inventor); Dungan, Mary E. (Inventor)

    1996-01-01

    An active noise control system using a compact sound source is effective in reducing aircraft engine duct noise. The fan noise from a turbofan engine is controlled using an adaptive filtered-x LMS algorithm. Single- and multi-channel control systems are used to control the fan blade passage frequency (BPF) tone, and the BPF tone together with its first harmonic, for a plane wave excitation. A multi-channel control system is used to control any spinning mode, and to control both fan tones and a high pressure compressor BPF tone simultaneously. In order to make active control of turbofan inlet noise a viable technology, a compact sound source is employed to generate the control field. This control field sound source consists of an array of identical thin, cylindrically curved panels with an inner radius of curvature corresponding to that of the engine inlet. These panels are flush mounted inside the inlet duct and sealed on all edges to prevent leakage around the panel and to minimize the aerodynamic losses created by the addition of the panels. Each panel is driven by one or more piezoelectric force transducers mounted on the surface of the panel. The response of the panel to excitation is maximized when it is driven at its resonance; therefore, the panel is designed such that its fundamental frequency is near the tone to be canceled, typically 2000-4000 Hz.
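
    A minimal single-channel filtered-x LMS loop, sketched below, shows the structure of the adaptive algorithm named above: the control filter output reaches the error sensor through a secondary path, and the weight update uses the reference signal filtered by that same path. The tone frequency, secondary-path response and step size are assumed values, not the engine-test configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 8000, 20000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 2000.0 * t)              # reference: a 2 kHz BPF-like tone
d = 0.8 * np.sin(2 * np.pi * 2000.0 * t + 0.6)  # primary noise at the error sensor
s = np.array([0.6, 0.3, 0.1])                   # assumed secondary-path FIR response

L, mu = 16, 0.01
w = np.zeros(L)                                 # adaptive control filter
x_buf = np.zeros(L)                             # reference history for the control filter
y_buf = np.zeros(len(s))                        # control-output history for the path
fx_buf = np.zeros(L)                            # filtered-reference history
xs_buf = np.zeros(len(s))                       # reference history for path filtering
err = np.zeros(n)

for k in range(n):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[k]
    y = w @ x_buf                               # control (anti-noise) signal
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    err[k] = d[k] + s @ y_buf                   # residual at the error microphone
    xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x[k]
    fx = s @ xs_buf                             # reference filtered by the secondary path
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
    w -= mu * err[k] * fx_buf                   # FxLMS weight update

print(f"residual power drops from {np.mean(err[:500] ** 2):.3f} "
      f"to {np.mean(err[-500:] ** 2):.5f}")
```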

  2. Sources of Error and the Statistical Formulation of M_S:m_b Seismic Event Screening Analysis

    Science.gov (United States)

    Anderson, D. N.; Patton, H. J.; Taylor, S. R.; Bonner, J. L.; Selby, N. D.

    2014-03-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT), a global ban on nuclear explosions, is currently in a ratification phase. Under the CTBT, an International Monitoring System (IMS) of seismic, hydroacoustic, infrasonic and radionuclide sensors is operational, and the data from the IMS are analysed by the International Data Centre (IDC). The IDC provides CTBT signatories basic seismic event parameters and a screening analysis indicating whether an event exhibits explosion characteristics (for example, shallow depth). An important component of the screening analysis is a statistical test of the null hypothesis H0: explosion characteristics, using empirical measurements of seismic energy (magnitudes). The established magnitude used for event size is the body-wave magnitude (denoted m_b) computed from the initial segment of a seismic waveform. IDC screening analysis is applied to events with m_b greater than 3.5. The Rayleigh wave magnitude (denoted M_S) is a measure of later-arriving surface wave energy. Magnitudes are measurements of seismic energy that include adjustments (physical correction model) for path and distance effects between event and station. Relative to m_b, earthquakes generally have a larger M_S magnitude than explosions. This article proposes a hypothesis test (screening analysis) using M_S and m_b that expressly accounts for physical correction model inadequacy in the standard error of the test statistic. With this hypothesis test formulation, the announced 2009 nuclear weapon test of the Democratic People's Republic of Korea fails to reject the null hypothesis H0: explosion characteristics.
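
    A heavily hedged sketch of the screening idea summarised above: compare an M_S - m_b score against a decision threshold, with the standard error of the test inflated by a term representing physical-correction-model inadequacy. The threshold, variances and example magnitudes below are illustrative placeholders, not the operational IDC values or the paper's estimates.

```python
import math

def screen_event(ms, mb, threshold=1.25, sigma_meas=0.12, sigma_model=0.10):
    """One-sided test of H0: explosion characteristics, using an M_S - m_b score.
    sigma_model stands in for physical-correction-model inadequacy."""
    score = ms - mb
    se = math.sqrt(sigma_meas ** 2 + sigma_model ** 2)   # combined standard error
    z = (score - threshold) / se
    return score, z

for ms, mb in [(3.6, 4.5), (4.9, 4.5)]:
    score, z = screen_event(ms, mb)
    verdict = "screened out (earthquake-like)" if z > 1.645 else "not screened out"
    print(f"M_S={ms:.1f}, m_b={mb:.1f}: score={score:+.2f}, z={z:+.2f} -> {verdict}")
```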

  3. Soft X-ray sources and their optical counterparts in the error box of the COS-B source 2CG 135+01

    Energy Technology Data Exchange (ETDEWEB)

    Caraveo, P A; Bignami, G F [Consiglio Nazionale delle Ricerche, Milan (Italy). Lab. di Fisica Cosmica e Tecnologie Relative; Paul, J A [CEA Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France). Section d' Astrophysique; Marano, B [Bologna Univ. (Italy). Ist. di Astronomia; Vettolani, G P [Consiglio Nazionale delle Ricerche, Bologna (Italy). Lab. di Radioastronomia

    1981-01-01

    We shall present here the Einstein observations for the 2CG 135+01 region, where the results are complete in the sense that we have a satisfactory coverage of the COS-B error box and, more importantly, that all the IPC sources found have been identified through both HRI and optical observations. In particular, the new spectral classifications of the present work were obtained at the Lojano Observatory (Bologna, Italy) with the Boller and Chivens spectrograph at the Cassegrain focus of the 1.52 m telescope. The spectral dispersion is 80 A/mm.

  4. Statistics of past errors as a source of safety factors for current models

    International Nuclear Information System (INIS)

    Shlyakhter, A.I.

    1994-01-01

    Results of a comparative analysis of actual vs. estimated uncertainty in several data sets derived from the natural and social sciences are presented. The data sets include: (i) time trends in sequential measurements of the same physical quantity; (ii) environmental measurements of uranium in soil; (iii) national population projections; (iv) projections for the United States' energy sector. Probabilities of large deviations from the true values are parametrized by an exponential distribution with the slope determined by the data. One can hedge against unsuspected uncertainties by inflating the reported uncertainty range by a default safety factor determined from the relevant historical data sets. This empirical approach can be used in the uncertainty analysis of low-probability/high-consequence events, such as the risk of global warming.
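
    The idea can be sketched on synthetic data: normalise past errors by their reported one-sigma uncertainties, fit an exponential slope to the large normalised deviations, and convert that slope into a default inflation (safety) factor for a chosen coverage level. The synthetic data and the 95% coverage target are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
true_vals = rng.normal(100.0, 1.0, 300)
reported = true_vals + rng.standard_t(df=3, size=300) * 0.8   # heavier tails than claimed
claimed_sigma = np.full(300, 0.8)

u = np.abs(reported - true_vals) / claimed_sigma              # normalised deviations
tail = u[u > 1.0]                                             # keep only the large errors
lam = 1.0 / np.mean(tail - 1.0)                               # MLE slope of the exponential tail

# Inflation factor so that 95 % of tail deviations fall inside k * claimed sigma.
coverage = 0.95
k = 1.0 - np.log(1.0 - coverage) / lam
print(f"fitted tail slope: {lam:.2f}, suggested safety factor: ~{k:.1f}x")
```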

  5. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer.

    Science.gov (United States)

    Rendón-Medina, Marco A; Andrade-Delgado, Laura; Telich-Tarriba, Jose E; Fuente-Del-Campo, Antonio; Altamirano-Arcos, Carlos A

    2018-01-01

    Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are higher in developing countries such as Mexico, where resources dedicated to health care are limited, therefore limiting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative differences. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options to manufacture RPMs, with the benefit of low cost and a relative error similar to that of other, more expensive technologies.
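
    The reported accuracy metrics are straightforward to compute; the sketch below does so for made-up paired calliper measurements on the mandible and on the printed model (the numbers are illustrative, not the study's measurements).

```python
import numpy as np

mandible_mm = np.array([101.2, 35.4, 47.8, 28.9, 62.3])   # reference distances on the mandible
printed_mm = np.array([100.6, 35.9, 48.4, 28.5, 63.1])    # same distances on the printed model

abs_diff = np.abs(printed_mm - mandible_mm)                # absolute difference per measurement
rel_diff = 100.0 * abs_diff / mandible_mm                  # relative difference in percent
print(f"mean absolute difference: {abs_diff.mean():.2f} mm")
print(f"mean relative difference: {rel_diff.mean():.2f} %")
```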

  6. Assessing error sources for Landsat time series analysis for tropical test sites in Viet Nam and Ethiopia

    Science.gov (United States)

    Schultz, Michael; Verbesselt, Jan; Herold, Martin; Avitabile, Valerio

    2013-10-01

    Researchers who use remotely sensed data can spend half of their total effort on preparing the data prior to analysis. If this data preprocessing does not match the application, the time spent on data analysis can increase considerably and can lead to inaccuracies. Despite the existence of a number of methods for pre-processing Landsat time series, each method has shortcomings, particularly for mapping forest changes under varying illumination, data availability and atmospheric conditions. Based on the requirements of mapping forest changes as defined by the United Nations (UN) Reducing Emissions from Deforestation and Forest Degradation (REDD) program, the accurate reporting of the spatio-temporal properties of these changes is necessary. We compared the impact of three fundamentally different radiometric preprocessing techniques, Moderate Resolution Atmospheric TRANsmission (MODTRAN), Second Simulation of a Satellite Signal in the Solar Spectrum (6S) and simple Dark Object Subtraction (DOS), on mapping forest changes using Landsat time series data. A modification of the Breaks For Additive Season and Trend (BFAST) monitor was used to jointly map the spatial and temporal agreement of forest changes at test sites in Ethiopia and Viet Nam. The suitability of the pre-processing methods for the occurring forest change drivers was assessed using recently captured ground truth and high-resolution data (1000 points). A method for creating robust generic forest maps used for the sampling design is presented. An assessment of error sources has been performed, identifying haze as a major source of commission error in the time series analysis.
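
    Of the three pre-processing methods compared above, the simplest, Dark Object Subtraction, can be sketched in a few lines: subtract a per-band dark-object value (here an assumed 1st percentile) from each band's digital numbers. MODTRAN and 6S are full radiative-transfer corrections and are not represented by this toy example.

```python
import numpy as np

def dark_object_subtraction(bands):
    """bands: array of shape (n_bands, rows, cols); returns haze-corrected bands."""
    corrected = np.empty_like(bands, dtype=float)
    for i, band in enumerate(bands):
        dark = np.percentile(band, 1.0)            # assumed dark-object estimate per band
        corrected[i] = np.clip(band - dark, 0.0, None)
    return corrected

rng = np.random.default_rng(3)
scene = rng.integers(40, 255, size=(3, 100, 100)).astype(float)   # toy 3-band scene
corrected = dark_object_subtraction(scene)
print("per-band minimum before:", scene.min(axis=(1, 2)))
print("per-band minimum after: ", corrected.min(axis=(1, 2)))
```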

  7. Dimensional Error in Rapid Prototyping with Open Source Software and Low-cost 3D-printer

    Directory of Open Access Journals (Sweden)

    Marco A. Rendón-Medina

    2018-01-01

    Full Text Available Summary: Rapid prototyping models (RPMs) have been extensively used in craniofacial and maxillofacial surgery, especially in areas such as orthognathic surgery, posttraumatic or oncological reconstructions, and implantology. Economic limitations are greater in developing countries such as Mexico, where resources dedicated to health care are limited, thereby restricting the use of RPMs to a few selected centers. This article aims to determine the dimensional error of a low-cost fused deposition modeling (FDM) 3D printer (Tronxy P802MA, Shenzhen, Tronxy Technology Co) used with open source software. An ordinary dry human mandible was scanned with a computed tomography device. The data were processed with open software to build a rapid prototype with a fused deposition machine. Linear measurements were performed to find the mean absolute and relative differences. The mean absolute and relative differences were 0.65 mm and 1.96%, respectively (P = 0.96). Low-cost FDM machines and open source software are excellent options for manufacturing RPMs, with the benefit of low cost and a relative error similar to that of other, more expensive technologies.

  8. Potential Functional Embedding Theory at the Correlated Wave Function Level. 2. Error Sources and Performance Tests.

    Science.gov (United States)

    Cheng, Jin; Yu, Kuang; Libisch, Florian; Dieterich, Johannes M; Carter, Emily A

    2017-03-14

    Quantum mechanical embedding theories partition a complex system into multiple spatial regions that can use different electronic structure methods within each, to optimize trade-offs between accuracy and cost. The present work incorporates accurate but expensive correlated wave function (CW) methods for a subsystem containing the phenomenon or feature of greatest interest, while self-consistently capturing quantum effects of the surroundings using fast but less accurate density functional theory (DFT) approximations. We recently proposed two embedding methods [for a review, see: Acc. Chem. Res. 2014, 47, 2768]: density functional embedding theory (DFET) and potential functional embedding theory (PFET). DFET provides a fast but non-self-consistent density-based embedding scheme, whereas PFET offers a more rigorous theoretical framework to perform fully self-consistent, variational CW/DFT calculations [as defined in part 1, CW/DFT means subsystem 1(2) is treated with CW(DFT) methods]. When originally presented, PFET was only tested at the DFT/DFT level of theory as a proof of principle within a planewave (PW) basis. Part 1 of this two-part series demonstrated that PFET can be made to work well with mixed Gaussian type orbital (GTO)/PW bases, as long as optimized GTO bases and consistent electron-ion potentials are employed throughout. Here in part 2 we conduct the first PFET calculations at the CW/DFT level and compare them to DFET and full CW benchmarks. We test the performance of PFET at the CW/DFT level for a variety of types of interactions (hydrogen bonding, metallic, and ionic). By introducing an intermediate CW/DFT embedding scheme denoted DFET/PFET, we show how PFET remedies different types of errors in DFET, serving as a more robust type of embedding theory.

  9. Testing the Motor Simulation Account of Source Errors for Actions in Recall

    Directory of Open Access Journals (Sweden)

    Nicholas Lange

    2017-09-01

    Full Text Available Observing someone else perform an action can lead to false memories of self-performance – the observation inflation effect. One explanation is that action simulation via mirror neuron activation during action observation is responsible for observation inflation by enriching memories of observed actions with motor representations. In three experiments we investigated this account of source memory failures, using a novel paradigm that minimized influences of verbalization and prior object knowledge. Participants worked in pairs to take turns acting out geometric shapes and letters. The next day, participants recalled either actions they had performed or those they had observed. Experiment 1 showed that participants falsely retrieved observed actions as self-performed, but also retrieved self-performed actions as observed. Experiment 2 showed that preventing participants from encoding observed actions motorically by taxing their motor system with a concurrent motor task did not lead to the predicted decrease in false claims of self-performance. Indeed, Experiment 3 showed that this was the case even if participants were asked to carefully monitor their recall. Because our data provide no evidence for a motor activation account, we also discussed our results in light of a source monitoring account.

  10. Brachytherapy Partial Breast Irradiation: Analyzing Effect of Source Configurations on Dose Metrics Relevant to Toxicity

    International Nuclear Information System (INIS)

    Cormack, Robert A.; Devlin, Phillip M.

    2008-01-01

    Purpose: Recently, the use of partial breast irradiation (PBI) for patients with early-stage breast cancer with low-risk factors has increased. The volume of the high-dose regions has been correlated with toxicity in interstitial treatment. Although no such associations have been made in applicator-based experience, new applicators are being developed that use complex noncentered source configurations. This work studied the effect of noncentered source placements on the volume of the high-dose regions around a spherical applicator. Methods and Materials: Many applicator configurations were numerically simulated for a range of inflation radii. For each configuration, a dose homogeneity index was used as a dose metric to measure the volume of the high-dose region. Results: All multisource configurations examined resulted in an increase of the high-dose region compared with a single, centered source. The resulting decrease in the prescription dose homogeneity index was more pronounced for sources further from the center of the applicator, and the effect was reduced as the number of dwell locations was increased. Conclusion: The geometries of particular applicators were not considered, to achieve a more general result. On the basis of the calculations of this work, it would appear that treatment using noncentered dwell locations will lead to an increase in the volume of the high-dose regions.
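    A minimal sketch (in Python) of a dose homogeneity index computed from a dose grid; the abstract does not state its exact definition, so the common interstitial form DHI = (V100 - V150) / V100 is assumed here for illustration only:

        import numpy as np

        def dose_homogeneity_index(dose, prescription, voxel_volume):
            # Vx = volume receiving at least x% of the prescription dose (assumed DHI definition)
            v100 = np.count_nonzero(dose >= 1.0 * prescription) * voxel_volume
            v150 = np.count_nonzero(dose >= 1.5 * prescription) * voxel_volume
            return (v100 - v150) / v100 if v100 > 0 else 0.0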

  11. The identical-twin transfusion syndrome: a source of error in estimating IQ resemblance and heritability.

    Science.gov (United States)

    Munsinger, H

    1977-01-01

    Published studies show that among identical twins, lower birthweight is associated with lower adult intelligence. However, no such relation between birthweight and adult IQ exists among fraternal twins. A likely explanation for the association between birthweight and intelligence among identical twins is the identical-twin transfusion syndrome, which occurs only between some monochorionic identical twin pairs. The IQ scores from separated identical twins were reanalysed to explore the consequences of the identical-twin transfusion syndrome for IQ resemblance and heritability. Among 129 published cases of identical twin pairs reared apart, 76 pairs contained some birthweight information. The 76 pairs were separated into three classes: 23 pairs in which there was clear evidence of a substantial birthweight difference (indicating the probable existence of the identical-twin transfusion syndrome), 27 pairs in which the information on birthweight was ambiguous, and 26 pairs in which there was clear evidence that the twins were similar in birthweight. The reanalyses showed: (1) birthweight differences are positively associated with IQ differences in the total sample of separated identical twins; (2) within the group of 23 twin pairs who showed large birthweight differences, there was a positive relation between birthweight differences and IQ differences; (3) when heritability of IQ is estimated for those twins who did not suffer large birthweight differences, the resemblance (and thus, h2) of the separated identical twins' IQ is 0.95. Given that the average reliability of the individual IQ test is around 0.95, these data suggest that genetic factors and errors of measurement cause the individual differences in IQ among human beings. Because of the identical-twin transfusion syndrome, previous studies of MZ twins have underestimated the effect of genetic factors on IQ. An analysis of the IQs for heavier and lighter birthweight twins suggests that the main effect of the

  12. Standardizing electrophoresis conditions: how to eliminate a major source of error in the comet assay.

    Directory of Open Access Journals (Sweden)

    Gunnar Brunborg

    2015-06-01

    Full Text Available In the alkaline comet assay, cells are embedded in agarose, lysed, and then subjected to further processing including electrophoresis at high pH (>13). We observed very large variations of mean comet tail lengths of cell samples from the same population when spread on a glass or plastic substrate and subjected to electrophoresis. These variations might be cancelled out if comets are scored randomly over a large surface, or if all the comets are scored. The mean tail length may then be representative of the population, although its standard error is large. However, the scoring process often involves selection of 50-100 comets in areas selected in an unsystematic way from a large gel on a glass slide. When using our 96-sample minigel format (1), neighbouring sample variations are easily detected. We have used this system to study the cause of the comet assay variations during electrophoresis and we have defined experimental conditions which reduce the variations to a minimum. We studied the importance of various physical parameters during electrophoresis: (i) voltage; (ii) duration of electrophoresis; (iii) electric current; (iv) temperature; and (v) agarose concentration. We observed that the voltage (V/cm) varied substantially during electrophoresis, even within a few millimetres of distance between gel samples. Not unexpectedly, both the potential (V/cm) and the time were linearly related to the mean comet tail, whereas the current was not. By measuring the local voltage with microelectrodes a few millimetres apart, we observed substantial local variations in V/cm, and they increased with time. This explains the large variations in neighbouring sample comet tails of 25% or more. By introducing simple technology (circulation of the solution during electrophoresis, and temperature control), these variations in mean comet tail were largely abolished, as were the V/cm variations. Circulation was shown to be particularly important and optimal conditions

  13. The U.S. Navy's Global Wind-Wave Models: An Investigation into Sources of Errors in Low-Frequency Energy Predictions

    National Research Council Canada - National Science Library

    Rogers, W

    2002-01-01

    This report describes an investigation to determine the relative importance of various sources of error in the two global-scale models of wind-generated surface waves used operationally by the U.S. Navy...

  14. Performance of multi-aperture grid extraction systems for an ITER-relevant RF-driven negative hydrogen ion source

    Science.gov (United States)

    Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Fröschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Stäbler, A.; Wünderlich, D.

    2011-07-01

    The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimension 1.9 × 0.9 m2 by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (∼1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on the comparison of the source performance at BATMAN of an ITER-relevant extraction system equipped with chamfered apertures with a 14 mm diameter and the 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density—being consistent with ion trajectory calculations—and the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.

  15. Performance of multi-aperture grid extraction systems for an ITER-relevant RF-driven negative hydrogen ion source

    International Nuclear Information System (INIS)

    Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Froeschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Staebler, A.; Wuenderlich, D.

    2011-01-01

    The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimension 1.9 × 0.9 m2 by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (∼1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on the comparison of the source performance at BATMAN of an ITER-relevant extraction system equipped with chamfered apertures with a 14 mm diameter and the 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density, consistent with ion trajectory calculations, and the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.

  16. Triplet Excited States as a Source of Relevant (Bio)Chemical Information

    OpenAIRE

    Jiménez Molero, María Consuelo; Miranda Alonso, Miguel Ángel

    2014-01-01

    The properties of triplet excited states are markedly medium-dependent, which turns these species into valuable tools for investigating the microenvironments existing in protein binding pockets. Monitoring of the triplet excited state behavior of drugs within transport proteins (serum albumins and alpha(1)-acid glycoproteins) by laser flash photolysis constitutes a valuable source of information on the strength of interaction, conformational freedom and protection from oxygen or other external...

  17. Influence of information sources on hepatitis B screening behavior and relevant psychosocial factors among Asian immigrants.

    Science.gov (United States)

    Tanaka, Miho; Strong, Carol; Lee, Sunmin; Juon, Hee-Soon

    2013-08-01

    This study examines how different information sources relate to Health Belief Model constructs, hepatitis B virus (HBV) knowledge, and HBV screening. The Maryland Asian American Liver Cancer Education Program administered a survey of 877 Asian immigrants. The most common sources of information identified by the multiple-answer questions were newspapers (39.8%), physicians (39.3%), friends (33.8%), TV (31.7%), and the Internet (29.5%). Path analyses (controlling for age, sex, educational level, English proficiency, proportion of life in the U.S., health insurance coverage, and family history of HBV infection) showed that learning about HBV from physicians had the strongest direct effect; friends had a marginal indirect effect. Perceived risk, benefits, and severity played limited roles in mediation effects. Path analysis results differed by ethnicity. Physician-based HBV screening intervention would be effective, but should be complemented with community health campaigns through popular information sources for the uninsured.

  18. Coulomb disintegration as an information source for relevant processes in nuclear astrophysics

    International Nuclear Information System (INIS)

    Bertulani, C.A.

    1989-01-01

    The possibility of obtaining the photodisintegration cross section using the equivalent-photon number method, first deduced and employed for Coulomb disintegration processes, has been suggested. This is very interesting because there exist radiative capture processes, related to photodisintegration through time reversal, that are relevant in astrophysics. In this paper, the recent results of the Karlsruhe and the Texas A and M groups on the Coulomb disintegration of 6Li and 7Li and the problems of the method are discussed. The ideas developed in a previous paper (Nucl. Phys. A458 (1986) 188) are confirmed qualitatively. To understand the process quantitatively it is necessary to use a quantum treatment, which would imply the introduction of Coulomb excitation effects of higher orders. The Coulomb disintegration of exotic secondary beams is also studied. Of particular interest is the question of what kind of nuclear structure information, such as binding energies or momentum distributions, may be obtained. (Author) [es

  19. Perceived relevance and information needs regarding food topics and preferred information sources among Dutch adults: results of a quantitative consumer study

    NARCIS (Netherlands)

    Dillen, van S.M.E.; Hiddink, G.J.; Koelen, M.A.; Graaf, de C.; Woerkum, van C.M.J.

    2004-01-01

    Objective: For more effective nutrition communication, it is crucial to identify sources from which consumers seek information. Our purpose was to assess perceived relevance and information needs regarding food topics, and preferred information sources by means of quantitative consumer research.

  20. From the Lab to the real world: sources of error in UF6 gas enrichment monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Lombardi, Marcie L.

    2012-03-01

    UF6 gas enrichment monitors have required empty pipe measurements to accurately determine the pipe attenuation (the pipe attenuation is typically much larger than the attenuation in the gas). This dissertation reports on a method for determining the thickness of a pipe in a GCEP when obtaining an empty pipe measurement may not be feasible. This dissertation studies each of the components that may add to the final error in the enrichment measurement, and the factors that were taken into account to mitigate these issues are also detailed and tested. The use of an x-ray generator as a transmission source and the attending stability issues are addressed. Both analytical calculations and experimental measurements have been used. For completeness, some real-world analysis results from the URENCO Capenhurst enrichment plant have been included, where the final enrichment error has remained well below 1% for approximately two months.

  1. Error related negativity and multi-source interference task in children with attention deficit hyperactivity disorder-combined type

    Directory of Open Access Journals (Sweden)

    Rosana Huerta-Albarrán

    2015-03-01

    Full Text Available Objective To compare the performance of children with attention deficit hyperactivity disorder-combined (ADHD-C) type with control children in the multi-source interference task (MSIT) evaluated by means of error-related negativity (ERN). Method We studied 12 children with ADHD-C type with a median age of 7 years; control children were age- and gender-matched. Children performed the MSIT with simultaneous recording of the ERN. Results We found no differences in MSIT parameters among groups. We found no differences in ERN variables between groups. We found a significant association of ERN amplitude with MSIT in children with ADHD-C type. Some correlations were positive (frequency of hits and MSIT amplitude) and others negative (frequency of errors and reaction time in MSIT). Conclusion Children with ADHD-C type exhibited a significant association between ERN amplitude and MSIT. These results underline the participation of a cingulo-fronto-parietal network and could help in the comprehension of the pathophysiological mechanisms of ADHD.

  2. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    Science.gov (United States)

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.

  3. Ruminal metagenomic libraries as a source of relevant hemicellulolytic enzymes for biofuel production.

    Science.gov (United States)

    Duque, Estrella; Daddaoua, Abdelali; Cordero, Baldo F; Udaondo, Zulema; Molina-Santiago, Carlos; Roca, Amalia; Solano, Jennifer; Molina-Alcaide, Eduarda; Segura, Ana; Ramos, Juan-Luis

    2018-04-17

    The success of second-generation (2G) ethanol technology relies on the efficient transformation of hemicellulose into monosaccharides and, particularly, on the full conversion of xylans into xylose, which accounts for over 18% of fermentable sugars. We sought new hemicellulases using ruminal liquid, after enrichment of microbes with industrial lignocellulosic substrates and preparation of metagenomic libraries. Among 150 000 fosmid clones tested, we identified 22 clones with endoxylanase activity and 125 with β-xylosidase activity. These positive clones were sequenced en masse, and the analysis revealed open reading frames with a low degree of similarity to known glycosyl hydrolase families. Among them, we searched for enzymes that were thermostable (activity at > 50°C) and that operate at a high rate at pH around 5. After a wide series of assays, the clones exhibiting the highest endoxylanase and β-xylosidase activities were identified. The fosmids were sequenced, and the corresponding genes cloned, expressed and the proteins purified. We found that the activity of the most active β-xylosidase was at least 10-fold higher than that in commercial fungal enzymatic cocktails. Endoxylanase activity was in the range of fungal enzymes. Fungal enzymatic cocktails supplemented with the bacterial hemicellulases exhibited enhanced release of sugars from pretreated sugar cane straw, a relevant agricultural residue. © 2018 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  4. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  5. Sample presentation, sources of error and future perspectives on the application of vibrational spectroscopy in the wine industry.

    Science.gov (United States)

    Cozzolino, Daniel

    2015-03-30

    Vibrational spectroscopy encompasses a number of techniques and methods including ultra-violet, visible, Fourier transform infrared or mid infrared, near infrared and Raman spectroscopy. The use and application of spectroscopy generates spectra containing hundreds of variables (absorbances at each wavenumber or wavelength), resulting in the production of large data sets representing the chemical and biochemical wine fingerprint. Multivariate data analysis techniques are then required to handle the large amount of data generated, in order to interpret the spectra in a meaningful way and to develop a specific application. This paper focuses on developments in sample presentation and the main sources of error when vibrational spectroscopy methods are applied in wine analysis. Recent and novel applications will be discussed as examples of these developments. © 2014 Society of Chemical Industry.

  6. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  7. Consumer exposure to biocides - identification of relevant sources and evaluation of possible health effects

    Directory of Open Access Journals (Sweden)

    Heger Wolfgang

    2010-02-01

    Full Text Available Abstract Background Products containing biocides are used for a variety of purposes in the home environment. To assess potential health risks, data on products containing biocides were gathered by means of a market survey, exposures were estimated using a worst case scenario approach (screening), the hazard of the active components was evaluated, and a preliminary risk assessment was conducted. Methods Information on biocide-containing products was collected by on-site research, by an internet inquiry as well as research into databases and lists of active substances. Twenty active substances were selected for detailed investigation. The products containing these substances were subsequently classified by range of application; typical concentrations were derived. Potential exposures were then estimated using a worst case scenario approach according to the European Commission's Technical Guidance Document on Risk Assessment. Relevant combinations of scenarios and active substances were identified. The toxicological data for these substances were compiled in substance dossiers. For estimating risks, the margins of exposure (MOEs) were determined. Results Numerous consumer products were found to contain biocides. However, it appeared that only a limited number of biocidal active substances or groups of biocidal active substances were being used. The lowest MOEs for dermal exposure or exposure by inhalation were obtained for the following scenarios and biocides: indoor pest control using sprays, stickers or evaporators (chlorpyrifos, dichlorvos) and spraying of disinfectants as well as cleaning of surfaces with concentrates (hydrogen peroxide, formaldehyde, glutardialdehyde). The risk from aggregate exposure to individual biocides via different exposure scenarios was higher than the highest single exposure on average by a factor of three. Of the 20 biocides assessed, 10 had skin-sensitizing properties. The biocides isothiazolinone (mixture of 5-chloro

  8. Triplet excited States as a source of relevant (bio)chemical information.

    Science.gov (United States)

    Jiménez, M Consuelo; Miranda, Miguel A

    2014-01-01

    The properties of triplet excited states are markedly medium-dependent, which turns these species into valuable tools for investigating the microenvironments existing in protein binding pockets. Monitoring of the triplet excited state behavior of drugs within transport proteins (serum albumins and α1-acid glycoproteins) by laser flash photolysis constitutes a valuable source of information on the strength of interaction, conformational freedom and protection from oxygen or other external quenchers. With proteins, formation of spatially confined triplet excited states is favored over competitive processes affording ionic species. Remarkably, under an aerobic atmosphere, the triplet decay of drug@protein complexes is dramatically longer than in bulk solution. This offers a convenient dynamic range for assignment of different triplet populations or for stereochemical discrimination. In this review, selected examples of the application of the laser flash photolysis technique are described, including drug distribution between the bulk solution and the protein cavities, or between two types of proteins, detection of drug-drug interactions inside proteins, and enzyme-like activity processes mediated by proteins. Finally, protein encapsulation can also modify the photoreactivity of the guest. This is illustrated by presenting an example of retarded photooxidation.

  9. On the relevance of source effects in geomagnetic pulsations for induction soundings

    Science.gov (United States)

    Neska, Anne; Tadeusz Reda, Jan; Leszek Neska, Mariusz; Petrovich Sumaruk, Yuri

    2018-03-01

    This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption as a precondition for the proper performance of these methods is partly violated by the local nature of field line resonances which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that, and explained why, in spite of this the application of remote reference stations at quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.

  10. On the relevance of source effects in geomagnetic pulsations for induction soundings

    Directory of Open Access Journals (Sweden)

    A. Neska

    2018-03-01

    Full Text Available This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption as a precondition for the proper performance of these methods is partly violated by the local nature of field line resonances which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that, and explained why, in spite of this the application of remote reference stations at quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.

  11. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    Science.gov (United States)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by the sensor pairs, rather than for the least residual between the model-calculated and measured arrivals. The results of numerical examples and in situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
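    A minimal sketch (in Python, with made-up sensor coordinates, velocity and picks; not the published VFOM) of the underlying idea of locating a point source from arrival-time differences, where each sensor pair defines a hyperboloid and the location is sought near their common intersection:

        import numpy as np
        from itertools import combinations
        from scipy.optimize import minimize

        sensors  = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0],
                             [0.0, 100.0, 0.0], [0.0, 0.0, 100.0]])   # sensor positions, m
        arrivals = np.array([0.031, 0.024, 0.027, 0.033])             # picked arrival times, s (may contain LPEs)
        velocity = 3000.0                                             # m/s, assumed homogeneous medium

        def objective(x):
            # Mismatch, summed over sensor pairs, between measured arrival-time differences
            # and those implied by a candidate location x (one hyperboloid per pair).
            d = np.linalg.norm(sensors - x, axis=1)
            return sum(((d[i] - d[j]) / velocity - (arrivals[i] - arrivals[j])) ** 2
                       for i, j in combinations(range(len(sensors)), 2))

        result = minimize(objective, x0=np.array([50.0, 50.0, 50.0]), method="Nelder-Mead")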

  12. Standard Practice for Minimizing Dosimetry Errors in Radiation Hardness Testing of Silicon Electronic Devices Using Co-60 Sources

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This practice covers recommended procedures for the use of dosimeters, such as thermoluminescent dosimeters (TLD's), to determine the absorbed dose in a region of interest within an electronic device irradiated using a Co-60 source. Co-60 sources are commonly used for the absorbed dose testing of silicon electronic devices. Note 1—This absorbed-dose testing is sometimes called “total dose testing” to distinguish it from “dose rate testing.” Note 2—The effects of ionizing radiation on some types of electronic devices may depend on both the absorbed dose and the absorbed dose rate; that is, the effects may be different if the device is irradiated to the same absorbed-dose level at different absorbed-dose rates. Absorbed-dose rate effects are not covered in this practice but should be considered in radiation hardness testing. 1.2 The principal potential error for the measurement of absorbed dose in electronic devices arises from non-equilibrium energy deposition effects in the vicinity o...

  13. Estimating national crop yield potential and the relevance of weather data sources

    Science.gov (United States)

    Van Wart, Justin

    2011-12-01

    To determine where, when, and how to increase yields, researchers often analyze the yield gap (Yg), the difference between actual current farm yields and crop yield potential. Crop yield potential (Yp) is the yield of a crop cultivar grown under specific management limited only by temperature and solar radiation, and also by precipitation for water-limited yield potential (Yw). Yp and Yw are critical components of Yg estimations, but are very difficult to quantify, especially at larger scales, because management data and especially daily weather data are scarce. A protocol was developed to estimate Yp and Yw at national scales using site-specific weather, soils and management data. Protocol procedures and inputs were evaluated to determine how to improve the accuracy of Yp, Yw and Yg estimates. The protocol was also used to evaluate raw, site-specific and gridded weather database sources for use in simulations of Yp or Yw. The protocol was applied to estimate crop Yp in US irrigated maize and Chinese irrigated rice and Yw in US rainfed maize and German rainfed wheat. These crops and countries account for >20% of global cereal production. The results have significant implications for past and future studies of Yp, Yw and Yg. Accuracy of national long-term average Yp and Yw estimates was significantly improved if (i) >7 years of simulations were performed for irrigated sites and >15 years for rainfed sites, (ii) >40% of nationally harvested area was within 100 km of all simulation sites, (iii) observed weather data coupled with satellite-derived solar radiation data were used in simulations, and (iv) planting and harvesting dates were specified within ±7 days of farmers' actual practices. These are much higher standards than have been applied in national estimates of Yp and Yw, and this protocol is a substantial step in making such estimates more transparent, robust, and straightforward. Finally, this protocol may be a useful tool for understanding yield trends and directing

  14. European Legislation to Prevent Loss of Control of Sources and to Recover Orphan Sources, and Other Requirements Relevant to the Scrap Metal Industry

    Energy Technology Data Exchange (ETDEWEB)

    Janssens, A.; Tanner, V.; Mundigl, S., E-mail: augustin.janssens@ec.europa.eu [European Commission (Luxembourg)

    2011-07-15

    European legislation (Council Directive 2003/122/EURATOM) has been adopted with regard to the control of high-activity sealed radioactive sources (HASS). This Directive is now part of an overall recast of current radiation protection legislation. At the same time the main Directive, 96/29/EURATOM, laying down Basic Safety Standards (BSS) for the health protection of the general public and workers against the dangers of ionizing radiation, is being revised in the light of the new recommendations of the International Commission on Radiological Protection (ICRP). The provisions for exemption and clearance are a further relevant feature of the new BSS. The current issues emerging from the revision and recast of the BSS are discussed, in the framework of the need to protect the scrap metal industry from orphan sources and to manage contaminated metal products. (author)

  15. Planned upgrade to the coaxial plasma source facility for high heat flux plasma flows relevant to tokamak disruption simulations

    International Nuclear Information System (INIS)

    Caress, R.W.; Mayo, R.M.; Carter, T.A.

    1995-01-01

    Plasma disruptions in tokamaks remain serious obstacles to the demonstration of economical fusion power. In disruption simulation experiments, some important effects have not been taken into account. Present disruption simulation experimental data do not include effects of the high magnetic fields expected near the PFCs in a tokamak major disruption. In addition, temporal and spatial scales are much too short in present simulation devices to be of direct relevance to tokamak disruptions. To address some of these inadequacies, an experimental program is planned at North Carolina State University employing an upgrade to the Coaxial Plasma Source (CPS-1) magnetized coaxial plasma gun facility. The advantages of the CPS-1 plasma source over present disruption simulation devices include the ability to irradiate large material samples at extremely high areal energy densities, and the ability to perform these material studies in the presence of a high magnetic field. Other tokamak disruption relevant features of CPS-1U include a high ion temperature, high electron temperature, and long pulse length

  16. CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources.

    Science.gov (United States)

    Bleda, Marta; Tarraga, Joaquin; de Maria, Alejandro; Salavert, Francisco; Garcia-Alonso, Luz; Celma, Matilde; Martin, Ainoha; Dopazo, Joaquin; Medina, Ignacio

    2012-07-01

    During the past years, the advances in high-throughput technologies have produced an unprecedented growth in the number and size of repositories and databases storing relevant biological data. Today, there is more biological information than ever but, unfortunately, the current status of many of these repositories is far from optimal. Some of the most common problems are that the information is spread out over many small databases; frequently there are different standards among repositories and some databases are no longer supported or they contain information that is too specific and unconnected. In addition, data size is increasingly becoming an obstacle when accessing or storing biological data. All these issues make it very difficult to extract and integrate information from different sources, to analyze experiments or to access and query this information in a programmatic way. CellBase provides a solution to the growing need for integration by easing access to biological data. CellBase implements a set of RESTful web services that query a centralized database containing the most relevant biological data sources. The database is hosted on our servers and is regularly updated. CellBase documentation can be found at http://docs.bioinfo.cipf.es/projects/cellbase.
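    A minimal sketch (in Python) of what programmatic access to such RESTful services typically looks like; the base URL, resource path and parameters below are placeholders, not the documented CellBase API:

        import requests

        BASE_URL = "https://example.org/cellbase/webservices/rest"   # placeholder host, not the real service
        resp = requests.get(f"{BASE_URL}/latest/hsapiens/feature/gene/BRCA2/info",
                            params={"assembly": "grch38"}, timeout=30)
        resp.raise_for_status()
        gene_info = resp.json()   # JSON payload with the requested annotation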

  17. Limit of detection in the presence of instrumental and non-instrumental errors: study of the possible sources of error and application to the analysis of 41 elements at trace levels by inductively coupled plasma-mass spectrometry technique

    International Nuclear Information System (INIS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Tapparo, Andrea; Pastore, Paolo

    2015-01-01

    In this paper the detection limit was estimated when signals were affected by two error contributions, namely instrumental errors and operational (non-instrumental) errors. The detection limit was obtained theoretically following the hypothesis-testing schema implemented with the calibration curve methodology. The experimental calibration design was based on J standards measured I times, with non-instrumental errors affecting each standard systematically but randomly among the J levels. A two-component variance regression was performed to determine the calibration curve and to define the detection limit under these conditions. The detection limit values obtained from the calibration of 41 elements at trace levels by ICP-MS were larger than those obtainable from a one-component variance regression. The role of reagent impurities in the instrumental errors was ascertained and taken into account. Environmental pollution was studied as a source of non-instrumental errors. The role of environmental pollution was evaluated by principal component analysis (PCA) applied to a series of nine calibrations performed over fourteen months. The influence of the seasonality of the environmental pollution on the detection limit was evidenced for many elements usually present in urban air particulate. The obtained results clearly indicated the need to use the two-component variance regression approach for the calibration of all the elements usually present in the environment at significant concentration levels. - Highlights: • The limit of detection was obtained considering a two-component variance regression. • Calibration data may be affected by instrumental and operational-condition errors. • The calibration model was applied to determine 41 elements at trace level by ICP-MS. • Non-instrumental errors were evidenced by PCA analysis.
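    A minimal sketch (in Python, with made-up calibration data; illustrative statistics rather than the paper's exact estimator) of combining an instrumental (within-level) and a non-instrumental (between-level) variance component from J standards measured I times, and turning the blank-level standard deviation into a detection limit with the usual hypothesis-testing factor of about 3.3:

        import numpy as np

        concentrations = np.array([0.0, 1.0, 2.0, 5.0, 10.0])         # J = 5 standard levels (made up)
        signals = np.array([[0.02, 0.03, 0.01],                       # I = 3 replicates per level (made up)
                            [0.11, 0.13, 0.12],
                            [0.22, 0.25, 0.23],
                            [0.53, 0.57, 0.55],
                            [1.04, 1.10, 1.06]])

        I = signals.shape[1]
        within_var = signals.var(axis=1, ddof=1).mean()               # instrumental (repeatability) component
        level_means = signals.mean(axis=1)
        slope, intercept = np.polyfit(concentrations, level_means, 1)
        resid_var = np.sum((level_means - (slope * concentrations + intercept)) ** 2) / (len(level_means) - 2)
        s0 = np.sqrt(max(resid_var, within_var / I))                  # sd of a level mean near the blank,
                                                                      # including the non-instrumental part
        detection_limit = 3.3 * s0 / slope                            # in concentration units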

  18. Analysis of positron annihilation lifetime data by numerical Laplace inversion: Corrections for source terms and zero-time shift errors

    International Nuclear Information System (INIS)

    Gregory, R.B.

    1991-01-01

    We have recently described modifications to the program CONTIN for the solution of Fredholm integral equations with convoluted kernels of the type that occur in the analysis of positron annihilation lifetime data. In this article, modifications to the program to correct for source terms in the sample and reference decay curves and for shifts in the position of the zero-time channel of the sample and reference data are described. Unwanted source components, expressed as a discrete sum of exponentials, may be removed from both the sample and reference data by modification of the sample data alone, without the need for direct knowledge of the instrument resolution function. Shifts in the position of the zero-time channel of up to half the channel width of the multichannel analyzer can be corrected. Analyses of computer-simulated test data indicate that the quality of the reconstructed annihilation-rate probability density functions is improved by employing a reference material with a short lifetime, and indicate that reference materials which generate free positrons by quenching positronium formation (i.e. strong oxidizing agents) have lifetimes that are too long (400-450 ps) to provide reliable estimates of the lifetime parameters for the short-lived components with the methods described here. Well-annealed single crystals of metals with lifetimes less than 200 ps, such as molybdenum (123 ps) and aluminium (166 ps), do not introduce significant errors in estimates of the lifetime parameters and are to be preferred as reference materials. The performance of our modified version of CONTIN is illustrated by application to positron annihilation in polytetrafluoroethylene. (orig.)

  19. Source credibility and idea improvement have independent effects on unconscious plagiarism errors in recall and generate-new tasks.

    Science.gov (United States)

    Perfect, Timothy J; Field, Ian; Jones, Robert

    2009-01-01

    Unconscious plagiarism occurs when people try to generate new ideas or when they try to recall their own ideas from among a set generated by a group. In this study, the factors that independently influence these two forms of plagiarism error were examined. Participants initially generated solutions to real-world problems in 2 domains of knowledge in collaboration with a confederate presented as an expert in 1 domain. Subsequently, the participant generated improvements to half of the ideas from each person. Participants returned 1 day later to recall either their own ideas or their partner's ideas and to complete a generate-new task. A double dissociation was observed. Generate-new plagiarism was driven by partner expertise but not by idea improvement, whereas recall plagiarism was driven by improvement but not expertise. This improvement effect on recall plagiarism was seen for the recall-own but not the recall-partner task, suggesting that the increase in recall-own plagiarism is due to mistaken idea ownership, not source confusion.

  20. Factory-discharged pharmaceuticals could be a relevant source of aquatic environment contamination: review of evidence and need for knowledge.

    Science.gov (United States)

    Cardoso, Olivier; Porcher, Jean-Marc; Sanchez, Wilfried

    2014-11-01

    Human and veterinary active pharmaceutical ingredients (APIs) are involved in the contamination of surface water, ground water, effluents, sediments and biota. Effluents of waste water treatment plants and hospitals are considered major sources of such contamination. However, recent evidence reveals high concentrations of a large number of APIs in effluents from pharmaceutical factories and in the receiving aquatic ecosystems. Moreover, laboratory exposures to these effluents and field experiments reveal various physiological disturbances in exposed aquatic organisms. It therefore seems relevant to increase knowledge of this route of contamination and to develop specific approaches for further environmental monitoring campaigns. The present study summarizes available data related to the impact of pharmaceutical factory discharges on aquatic ecosystem contamination and presents the associated challenges for scientists and environmental managers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. On the meniscus formation and the negative hydrogen ion extraction from ITER neutral beam injection relevant ion source

    International Nuclear Information System (INIS)

    Mochalskyy, S; Wünderlich, D; Ruf, B; Fantz, U; Franzen, P; Minea, T

    2014-01-01

    The development of a large area (Asource,ITER = 0.9 × 2 m2) hydrogen negative ion (NI) source constitutes a crucial step in construction of the neutral beam injectors of the international fusion reactor ITER. To understand the plasma behaviour in the boundary layer close to the extraction system the 3D PIC MCC code ONIX is exploited. Direct cross-checked analysis of the simulation and experimental results from the ITER-relevant BATMAN source testbed with a smaller area (Asource,BATMAN ≈ 0.32 × 0.59 m2) has been conducted for a low perveance beam, but for a full set of plasma parameters available. ONIX has been partially benchmarked by comparison to the results obtained using the commercial particle tracing code for positive ion extraction KOBRA3D. Very good agreement has been found in terms of meniscus position and its shape for simulations of different plasma densities. The influence of the initial plasma composition on the final meniscus structure was then investigated for NIs. As expected from the Child–Langmuir law, the results show that not only does the extraction potential play a crucial role on the meniscus formation, but also the initial plasma density and its electronegativity. For the given parameters, the calculated meniscus locates a few mm downstream of the plasma grid aperture provoking a direct NI extraction. Most of the surface produced NIs do not reach the plasma bulk, but move directly towards the extraction grid guided by the extraction field. Even for artificially increased electronegativity of the bulk plasma the extracted NI current from this region is low. This observation indicates a high relevance of the direct NI extraction. These calculations show that the extracted NI current from the bulk region is low even if a complete ion–ion plasma is assumed, meaning that direct extraction from surface produced ions should be present in order to obtain sufficiently high extracted NI current density. The calculated

  2. Nuclear weapon relevant materials and preventive arms control. Uranium-free fuels for plutonium elimination and spallation neutron sources

    International Nuclear Information System (INIS)

    Liebert, Wolfgang; Englert, Matthias; Pistner, Christoph

    2009-01-01

    technological challenges of nuclear non-proliferation, which are directly connected with the central role of weapon-relevant materials, and it is trying to present practical solutions on a technical basis: - Discover paths for the disposal of existing amounts of nuclear weapon-relevant materials, elaborating on the example of technically based plutonium disposal options: central technical questions of the possible use of uranium-free inert matrix fuel (IMF) in currently used light water reactors will be addressed in order to clarify which advantages or disadvantages exist in comparison to other disposal options. The investigation is limited to the comparison with one other reactor-based option, the use of uranium-plutonium mixed-oxide (MOX) fuels. - Analysis of proliferation-relevant potentials of new nuclear technologies (accessibility of weapon materials): Exemplary investigation of spallation neutron sources in order to improve this technology by a more proliferation-resistant shaping. Although they are obviously capable of breeding nuclear weapon-relevant materials like plutonium, uranium-233 or tritium, there has been no comprehensive analysis of the non-proliferation aspects of spallation neutron sources up to now. Both project parts provide contributions not only to the concept of preventive arms control but also to the shaping of technologies oriented towards the criteria of proliferation resistance.

  3. Sourcing of an alternative pericyte-like cell type from peripheral blood in clinically relevant numbers for therapeutic angiogenic applications.

    Science.gov (United States)

    Blocki, Anna; Wang, Yingting; Koch, Maria; Goralczyk, Anna; Beyer, Sebastian; Agarwal, Nikita; Lee, Michelle; Moonshi, Shehzahdi; Dewavrin, Jean-Yves; Peh, Priscilla; Schwarz, Herbert; Bhakoo, Kishore; Raghunath, Michael

    2015-03-01

    Autologous cells hold great potential for personalized cell therapy, reducing immunological complications and the risk of infections. However, low cell counts at harvest, with subsequently long expansion times and associated loss of cell function, currently impede the advancement of autologous cell therapy approaches. Here, we aimed to source clinically relevant numbers of proangiogenic cells from an easily accessible cell source, namely peripheral blood. Using macromolecular crowding (MMC) as a biotechnological platform, we derived a novel cell type from peripheral blood that is generated within 5 days in large numbers (10-40 million cells per 100 ml of blood). This blood-derived angiogenic cell (BDAC) type is of monocytic origin, but exhibits the pericyte markers PDGFR-β and NG2 and demonstrates strong angiogenic activity, hitherto ascribed only to MSC-like pericytes. Our findings suggest that BDACs represent an alternative pericyte-like cell population of hematopoietic origin that is involved in promoting early stages of microvasculature formation. As a proof of principle of BDAC efficacy in an ischemic disease model, BDAC injection rescued affected tissues in a murine hind limb ischemia model by accelerating and enhancing revascularization. Derived from a renewable tissue that is easy to collect, BDACs overcome current shortcomings of autologous cell therapy, in particular for tissue repair strategies.

  4. A multi-sensor burned area algorithm for crop residue burning in northwestern India: validation and sources of error

    Science.gov (United States)

    Liu, T.; Marlier, M. E.; Karambelas, A. N.; Jain, M.; DeFries, R. S.

    2017-12-01

    A leading source of outdoor emissions in northwestern India comes from crop residue burning after the annual monsoon (kharif) and winter (rabi) crop harvests. Agricultural burned area, from which agricultural fire emissions are often derived, can be poorly quantified due to the mismatch between moderate-resolution satellite sensors and the relatively small size and short burn period of the fires. Many previous studies use the Global Fire Emissions Database (GFED), which is based on the Moderate Resolution Imaging Spectroradiometer (MODIS) burned area product MCD64A1, as an outdoor fire emissions dataset. Correction factors based on MODIS active fire detections have previously attempted to account for small fires. We present a new burned area classification algorithm that leverages more frequent MODIS observations (500 m x 500 m) with higher spatial resolution Landsat (30 m x 30 m) observations. Our approach is based on two-tailed Normalized Burn Ratio (NBR) thresholds, abbreviated as ModL2T NBR, and results in an estimated 104 ± 55% higher burned area than GFEDv4.1s (version 4, MCD64A1 + small fires correction) in northwestern India during the 2003-2014 winter (October to November) burning seasons. Regional transport of winter fire emissions affects approximately 63 million people downwind. The general increase in burned area (+37% from 2003-2007 to 2008-2014) over the study period also correlates with increased mechanization (+58% in combine harvester usage from 2001-2002 to 2011-2012). Further, we find strong correlations between ModL2T NBR-derived burned area and the results of an independent survey (r = 0.68) and previous studies (r = 0.92). Sources of error arise from small median landholding sizes (1-3 ha), the heterogeneous spatial distribution of two dominant burning practices (partial and whole field), coarse spatio-temporal satellite resolution, cloud and haze cover, and limited Landsat scene availability. The burned area estimates of this study can be used to build
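    A minimal sketch (in Python; band names and threshold values are assumptions, not the calibrated ModL2T NBR parameters) of the Normalized Burn Ratio and a two-tailed threshold test on pre- and post-harvest imagery:

        import numpy as np

        def nbr(nir, swir):
            # Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance
            return (nir - swir) / (nir + swir)

        def burned_mask(nir_pre, swir_pre, nir_post, swir_post, upper=0.3, lower=0.1):
            # Flag pixels whose NBR is high before the burning season and low after it;
            # the two thresholds here are illustrative, not the published values.
            return (nbr(nir_pre, swir_pre) > upper) & (nbr(nir_post, swir_post) < lower)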

  5. On the meniscus formation and the negative hydrogen ion extraction from ITER neutral beam injection relevant ion source

    Science.gov (United States)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Fantz, U.; Franzen, P.; Minea, T.

    2014-10-01

    The development of a large area (A_source,ITER = 0.9 × 2 m2) hydrogen negative ion (NI) source constitutes a crucial step in the construction of the neutral beam injectors of the international fusion reactor ITER. To understand the plasma behaviour in the boundary layer close to the extraction system, the 3D PIC MCC code ONIX is exploited. A direct, cross-checked analysis of simulation and experimental results from the ITER-relevant BATMAN source testbed with a smaller area (A_source,BATMAN ≈ 0.32 × 0.59 m2) has been conducted for a low-perveance beam for which a full set of plasma parameters is available. ONIX has been partially benchmarked by comparison with results obtained using the commercial particle-tracing code for positive ion extraction, KOBRA3D. Very good agreement has been found in terms of meniscus position and shape for simulations at different plasma densities. The influence of the initial plasma composition on the final meniscus structure was then investigated for NIs. As expected from the Child-Langmuir law, the results show that not only does the extraction potential play a crucial role in meniscus formation, but so do the initial plasma density and its electronegativity. For the given parameters, the calculated meniscus is located a few mm downstream of the plasma grid aperture, leading to direct NI extraction. Most of the surface-produced NIs do not reach the plasma bulk, but move directly towards the extraction grid guided by the extraction field. Even for an artificially increased electronegativity of the bulk plasma, the extracted NI current from this region is low. This observation indicates a high relevance of direct NI extraction. These calculations show that the extracted NI current from the bulk region is low even if a complete ion-ion plasma is assumed, meaning that direct extraction of surface-produced ions must be present in order to obtain a sufficiently high extracted NI current density. The calculated extracted currents, both ions
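
    For reference, the Child-Langmuir law invoked above gives, in its standard planar form, the space-charge-limited current density that an extraction gap of length d can draw at extraction potential U_ext (textbook expression, quoted for orientation; this is not the specific form implemented in ONIX):

        \[ j_{\mathrm{CL}} = \frac{4\varepsilon_0}{9}\,\sqrt{\frac{2e}{m}}\;\frac{U_{\mathrm{ext}}^{3/2}}{d^{2}} \]

    The 3/2-power dependence on the extraction potential is what links the applied voltage to the position and curvature of the meniscus, while the plasma density and electronegativity determine the current the plasma can actually supply to the extraction region.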

  6. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
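
    The mechanism described above can be made concrete with a small numerical sketch: when the detector integrates over a spectral band in which the extinction coefficient varies, transmittances (not absorbances) average, so the measured absorbance falls below the Beer-Lambert prediction and the shortfall grows with concentration. The two-line band, extinction coefficients and concentrations below are invented for illustration and are not the parameters of the Note.

        import numpy as np

        # Two spectral lines standing in for a finite bandpass; the extinction
        # coefficients (L mol^-1 cm^-1) differ across the band, which is what
        # drives the deviation from Beer-Lambert. All numbers are invented.
        eps = np.array([1.0e4, 0.6e4])
        weights = np.array([0.5, 0.5])      # equal spectral weighting of the two lines
        path_cm = 1.0

        for conc in (1e-5, 5e-5, 1e-4):     # analyte concentration, mol/L
            # Polychromatic detector: transmittances average, then one log is taken.
            T_band = np.sum(weights * 10.0 ** (-eps * conc * path_cm))
            A_poly = -np.log10(T_band)
            # Monochromatic reference at the band-averaged extinction coefficient.
            A_mono = np.sum(weights * eps) * conc * path_cm
            print(f"A_mono={A_mono:.3f}  A_poly={A_poly:.3f}  "
                  f"relative error={100.0 * (A_poly - A_mono) / A_mono:+.1f}%")

    With these invented numbers the relative error grows from a fraction of a percent at low absorbance to several percent as the absorbance approaches 1, consistent with the magnitude quoted above.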

  7. Reducing errors in aircraft atmospheric inversion estimates of point-source emissions: the Aliso Canyon natural gas leak as a natural tracer experiment

    Science.gov (United States)

    Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.

    2018-04-01

    Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s^-1, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and
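
    As a hedged illustration of the covariance-scaling idea above, the sketch below solves a toy Bayesian inversion for a single scalar leak rate and compares a uniform model-data mismatch with one inflated in proportion to a per-observation wind-speed error proxy. The sensitivities, error magnitudes and prior are invented; the study's actual inversions use a full mesoscale transport model and spatially resolved footprints.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy setup: n_obs downwind CH4 enhancements (ppb) tied to a single scalar
        # leak rate q (kg/s) through transport-model sensitivities h (ppb per kg/s).
        n_obs = 12
        h = rng.uniform(0.5, 2.0, n_obs)
        q_true = 40.0
        y = h * q_true + rng.normal(0.0, 5.0, n_obs)        # synthetic observations

        q_prior, s_prior = 20.0, 30.0 ** 2                  # prior mean and variance

        def posterior_leak_rate(obs_err_std):
            # Gaussian posterior mean for a scalar source with diagonal mismatch R.
            r_inv = 1.0 / obs_err_std ** 2
            precision = 1.0 / s_prior + np.sum(h * r_inv * h)
            return (q_prior / s_prior + np.sum(h * r_inv * y)) / precision

        wind_err = rng.uniform(1.0, 10.0, n_obs)            # per-flight wind error proxy (m/s)
        print(posterior_leak_rate(np.full(n_obs, 5.0)))                # uniform mismatch
        print(posterior_leak_rate(5.0 * wind_err / wind_err.mean()))   # wind-scaled mismatch

    Observations collected under poorly modeled wind conditions are down-weighted by the larger diagonal entries of the mismatch covariance, which is the mechanism the abstract credits with reducing the influence of transport error.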

  8. Upper Devonian (Frasnian) non-calcified, algae, Alberta: Geological relevance to Leduc platforms and petroleum source rocks

    Energy Technology Data Exchange (ETDEWEB)

    Dix, G.R. (Univ. of British Columbia, Vancouver, BC (Canada))

    1990-12-01

    Several types of non-calcified fossil algae comparable to extant brown and green benthic macrophytes occur abundantly on two bedding planes in drill core from argillaceous slope carbonates of the Ireton Formation in northern Alberta. Fossiliferous strata abruptly overlie part of a stepped-back margin of the Sturgeon Lake carbonate platform (Leduc Formation), southeast of the Peace River Arch. Fossils are flattened organic fragments, some representing nearly complete specimens. Tentative comparisons are made with some Paleozoic algae; some of the Sturgeon Lake flora may be new species or genera. Preliminary examination of selected cores from the Ireton Formation and organic-rich Duvernay Formation in central Alberta indicates a widespread distribution of algal-derived organic matter within Upper Devonian basinal strata. The geological relevance of non-calcified algae to Devonian carbonate platforms and basins is postulated in three cases. Their presence in slope sediments may indicate that algal lawns flourished in muddy, upper slope environments. Fossils accumulated either in situ, or were ripped up and quickly buried within downslope resedimented deposits. All or some algal fragments may have been swept from the adjacent carbonate platform during storms. Prolific shallow water algal growth may have occurred simultaneously with oceanic crises when shallow water carbonate production either decreased or was shut down. The present position of fossil algae, therefore, would mark a bedding surface that is stratigraphically equivalent to an intraplatform disconformity. Regardless of the original environment, a sufficient accumulation of non-calcified algae in slope strata represents a viable petroleum source proximal to carbonate platforms. 46 refs., 9 figs.

  9. Error sources in the retrieval of aerosol information over bright surfaces from satellite measurements in the oxygen A band

    Science.gov (United States)

    Nanda, Swadhin; de Graaf, Martin; Sneep, Maarten; de Haan, Johan F.; Stammes, Piet; Sanders, Abram F. J.; Tuinder, Olaf; Pepijn Veefkind, J.; Levelt, Pieternel F.

    2018-01-01

    Retrieving aerosol optical thickness and aerosol layer height over a bright surface from measured top-of-atmosphere reflectance spectrum in the oxygen A band is known to be challenging, often resulting in large errors. In certain atmospheric conditions and viewing geometries, a loss of sensitivity to aerosol optical thickness has been reported in the literature. This loss of sensitivity has been attributed to a phenomenon known as critical surface albedo regime, which is a range of surface albedos for which the top-of-atmosphere reflectance has minimal sensitivity to aerosol optical thickness. This paper extends the concept of critical surface albedo for aerosol layer height retrievals in the oxygen A band, and discusses its implications. The underlying physics are introduced by analysing the top-of-atmosphere reflectance spectrum as a sum of atmospheric path contribution and surface contribution, obtained using a radiative transfer model. Furthermore, error analysis of an aerosol layer height retrieval algorithm is conducted over dark and bright surfaces to show the dependence on surface reflectance. The analysis shows that the derivative with respect to aerosol layer height of the atmospheric path contribution to the top-of-atmosphere reflectance is opposite in sign to that of the surface contribution - an increase in surface brightness results in a decrease in information content. In the case of aerosol optical thickness, these derivatives are anti-correlated, leading to large retrieval errors in high surface albedo regimes. The consequence of this anti-correlation is demonstrated with measured spectra in the oxygen A band from the GOME-2 instrument on board the Metop-A satellite over the 2010 Russian wildfires incident.
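
    The path/surface decomposition referred to above is commonly written, for a Lambertian surface, in the standard plane-parallel form quoted here for orientation (the symbols are generic and not necessarily those used in the paper):

        \[ R_{\mathrm{TOA}} = R_{\mathrm{path}} + \frac{T_{\downarrow} T_{\uparrow} A_s}{1 - s^{*} A_s} \]

    where A_s is the surface albedo, T_↓T_↑ the two-way atmospheric transmittance and s* the spherical albedo of the atmosphere. The derivative of the path term with respect to aerosol layer height has the opposite sign to that of the surface term, so as the surface brightens the two contributions increasingly offset one another - the loss of information content described above.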

  10. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA® terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  11. A dozen useful tips on how to minimise the influence of sources of error in quantitative electron paramagnetic resonance (EPR) spectroscopy-A review

    International Nuclear Information System (INIS)

    Mazur, Milan

    2006-01-01

    The principal and the most important error sources in quantitative electron paramagnetic resonance (EPR) measurements arising from sample-associated factors are the influence of the variation of the sample material (dielectric constant), sample size and shape, sample tube wall thickness, and sample orientation and positioning within the microwave cavity on the EPR signal intensity. Variation in these parameters can cause significant and serious errors in the primary phase of quantitative EPR analysis (i.e., data acquisition). The primary aim of this review is to provide useful suggestions, recommendations and simple procedures to minimise the influence of such primary error sources in quantitative EPR measurements. According to the literature, as well as results obtained in our EPR laboratory, the following are recommendations for samples, which are compared in quantitative EPR studies: (i) the shape of all samples should be identical; (ii) the position of the sample/reference in the cavity should be identical; (iii) a special alignment procedure for precise sample positioning within the cavity should be adopted; (iv) a special/consistent procedure for sample packing for a powder material should be used; (v) the wall thickness of sample tubes should be identical; (vi) the shape and wall thickness of quartz Dewars, where used, should be identical; (vii) where possible a double TE104 cavity should be used in quantitative EPR spectroscopy; (viii) the dielectric properties of unknown and standard samples should be as close as possible; (ix) a sample length less than double the cavity length should be used; (x) the optimised sample geometry for the X-band cavity is a 30 mm-long capillary with i.d. less than 1.5 mm; (xi) use of commercially distributed software for post-recording spectra manipulation is a basic necessity; and (xii) the sample and laboratory temperature should be kept constant during measurements. When the above recommendations and procedures were used

  12. Wastewood as a source of woodfuel: The case of Malawi and its relevance to sub-Saharan Africa

    International Nuclear Information System (INIS)

    Teplitz Sembitzky, W.

    1991-01-01

    Land clearing and forest sector residues, notably the wastewood generated on large timber plantations, can provide a sizeable and hitherto neglected source of woodfuel. This article highlights experience in Malawi where wastewood from pine plantations is converted into charcoal that is sold to residential, industrial and agro-industrial users. Similar initiatives proposed in other countries of sub-Saharan Africa indicate that comprehensive utilization of wastewood resources could help to reduce regional and local imbalances. (author). 19 refs

  13. Calculating Error Percentage in Using Water Phantom Instead of Soft Tissue Concerning 103Pd Brachytherapy Source Distribution via Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    OL Ahmadi

    2015-12-01

    Full Text Available Introduction: 103Pd is a low-energy source used in brachytherapy. According to the standards of the American Association of Physicists in Medicine, determination of the dosimetric parameters of brachytherapy sources before clinical application is considered highly important. Therefore, the present study aimed to compare the dosimetric parameters of the target source in a water phantom and in soft tissue. Methods: Following the TG-43U1 protocol, the dosimetric parameters around the 103Pd source were compared between a water phantom with a density of 0.998 g/cm3 and soft tissue with a density of 1.04 g/cm3, on the longitudinal and transverse axes, using the MCNP4C code, and the relative differences between the two conditions were evaluated. Results: The simulation results indicated that the radial dose function and the anisotropy function obtained in the water phantom agreed well with those in soft tissue up to a distance of 1.5 cm. With increasing distance the difference grew, reaching 4% at 6 cm from the source. Conclusions: Compared with the soft tissue phantom, the water phantom showed a relative difference of 4% at a distance of 6 cm from the source. Therefore, water phantom results can be used in practical applications instead of soft tissue with a maximum error of 4%; alternatively, the differences obtained at each distance can be used to correct results toward the soft tissue case.
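
    For context, the TG-43U1 formalism in which these parameters are defined expresses the dose rate around a line source as (standard notation, quoted for orientation):

        \[ \dot{D}(r,\theta) = S_K \, \Lambda \, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta) \]

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the geometry function, g_L(r) the radial dose function and F(r,θ) the anisotropy function; g_L(r) and F(r,θ) are the quantities whose water-versus-soft-tissue differences reach about 4% at 6 cm in the study above.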

  14. Sources of error in etched-track radon measurements and a review of passive detectors using results from a series of radon intercomparisons

    International Nuclear Information System (INIS)

    Ibrahimi, Z.-F.; Howarth, C.B.; Miles, J.C.H.

    2009-01-01

    Etched-track passive radon detectors are a well established and apparently simple technology. As with any measurement system, there are multiple sources of uncertainty and potential for error. The authors discuss these as well as good quality assurance practices. Identification and assessment of sources of error is crucial to maintain high quality standards by a measurement laboratory. These sources can be found both within and outside the radon measurement laboratory itself. They can lead to changes in track characteristics and ultimately detector response to radon exposure. Changes do not only occur during etching, but can also occur during the recording or counting of etched tracks (for example ageing and fading effects on track sensitivity, or focus and image acquisition variables). Track overlap means the linearity of response of detectors will vary as exposure increases. The laboratory needs to correct the calibration curve for this effect if it wishes to offer detectors that cover a range of exposures likely to be observed in the field. Extrapolation of results to estimate annual average concentrations also has uncertainty associated with it. Measurement systems need to be robust, reliable and stable. If a laboratory is not actively and constantly monitoring for anomalies via internal testing, the laboratory may not become aware of a problem until some form of external testing occurs, e.g. an accreditation process, performance test, interlaboratory comparison exercise or when a customer has cause to query results. Benchmark standards of accuracy and precision achievable with passive detectors are discussed, drawing on trends from the series of intercomparison exercises for passive radon detectors which began in 1982, organised by the National Radiological Protection Board (NRPB), subsequently the Health Protection Agency (HPA).

  15. Banana (Musa spp) from peel to pulp: ethnopharmacology, source of bioactive compounds and its relevance for human health.

    Science.gov (United States)

    Pereira, Aline; Maraschin, Marcelo

    2015-02-03

    Banana is a fruit with nutritional properties and also with acclaimed therapeutic uses, cultivated widely throughout the tropics as a source of food and income for people. Banana peel is known by its local and traditional use to promote wound healing, mainly from burns, and to help overcome or prevent a substantial number of illnesses, such as depression. This review critically assessed the phytochemical properties and biological activities of Musa spp fruit pulp and peel. A survey of the literature on banana (Musa spp, Musaceae), covering its botanical classification and nomenclature as well as the local and traditional use of its pulp and peel, was performed. In addition, the current state of the art on banana fruit pulp and peel as complex matrices and sources of high-value compounds from secondary metabolism was also approached. Dessert bananas and plantains are systematically classified into four sections, Eumusa, Rhodochlamys, Australimusa, and Callimusa, according to the number of chromosomes. The fruits differ only in their ploidy arrangement and a single scientific name can be given to all the edible bananas, i.e., Musa spp. The chemical composition of banana's peel and pulp comprises mostly carotenoids, phenolic compounds, and biogenic amines. The biological potential of those biomasses is directly related to their chemical composition, particularly as pro-vitamin A supplementation, as potential antioxidants attributed to their phenolic constituents, as well as in the treatment of Parkinson's disease considering their contents in l-dopa and dopamine. Banana's pulp and peel can be used as natural sources of antioxidants and pro-vitamin A due to their contents in carotenoids, phenolics, and amine compounds, for instance. For the development of a phytomedicine or even an allopathic medicine, banana fruit pulp and peel could be of interest as raw materials rich in beneficial bioactive compounds. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Dynamics of a Z Pinch X Ray Source for Heating ICF Relevant Hohlraums to 120-160eV

    Energy Technology Data Exchange (ETDEWEB)

    SANFORD,THOMAS W. L.; OLSON,RICHARD E.; MOCK,RAYMOND CECIL; CHANDLER,GORDON A.; LEEPER,RAMON J.; NASH,THOMAS J.; RUGGLES,LAURENCE E.; SIMPSON,WALTER W.; STRUVE,KENNETH W.; PETERSON,D.L.; BOWERS,R.L.; MATUSKA,W.

    2000-07-10

    A z-pinch radiation source has been developed that generates 60 ± 20 kJ of x-rays with a peak power of 13 ± 4 TW through a 4-mm diameter axial aperture on the Z facility. The source has heated NIF (National Ignition Facility)-scale (6-mm diameter by 7-mm high) hohlraums to 122 ± 6 eV and reduced-scale (4-mm diameter by 4-mm high) hohlraums to 155 ± 8 eV -- providing environments suitable for indirect-drive ICF (Inertial Confinement Fusion) studies. Eulerian-RMHC (radiation-hydrodynamics code) simulations that take into account the development of the Rayleigh-Taylor instability in the r-z plane provide integrated calculations of the implosion, x-ray generation, and hohlraum heating, as well as estimates of wall motion and plasma fill within the hohlraums. Lagrangian-RMHC simulations suggest that the addition of a 6 mg/cm3 CH2 fill in the reduced-scale hohlraum decreases hohlraum inner-wall velocity by ~40% with only a 3-5% decrease in peak temperature, in agreement with measurements.

  17. Reliability of published data on radionuclide half lives - relevance to the use of reference sources for checking instrument performance

    International Nuclear Information System (INIS)

    Waldock, P.M.

    1999-01-01

    Long-lived calibrated radioisotopes are frequently used for checking of instrumentation used in the measurement of radiation; examples include: radioisotope assay meters, radiation monitors and sample counting equipment. In 1986 we purchased a radioisotope calibrator (Capintec CRC120) which was supplied with a number of long-lived check sources by the manufacturer, one of which was barium-133. The source came with its own calibration certificate and a quoted half life of 10.74 years ± 0.05 years, traceable to the National Bureau of Standards in the USA, and is consistent with data published by the National Nuclear Data Center, Brookhaven National Laboratory in 1985 (Tuli 1985). However, we noted at the time that this is significantly different to the value of 7.2 years quoted in the Radiochemical Manual (Wilson 1966) published by the Radiochemical Centre, Amersham (now Nycomed-Amersham), and more recently we have noted that it is significantly different to the value of 10.53 years currently quoted on various Internet sites including the University of Sheffield Chemistry Department (Winter 1999). Further investigation showed similar or worse variations of published half lives with time for several radioisotopes. Letter-to-the-editor

  18. Dynamics of a Z Pinch X Ray Source for Heating ICF Relevant Hohlraums to 120-160eV

    International Nuclear Information System (INIS)

    Sanford, Thomas W.L.; Olson, Richard E.; Mock, Raymond Cecil; Chandler, Gordon A.; Leeper, Ramon J.; Nash, Thomas J.; Ruggles, Laurence E.; Simpson, Walter W.; Struve, Kenneth W.; Peterson, D.L.; Bowers, R.L.; Matuska, W.

    2000-01-01

    A z-pinch radiation source has been developed that generates 60 ± 20 kJ of x-rays with a peak power of 13 ± 4 TW through a 4-mm diameter axial aperture on the Z facility. The source has heated NIF (National Ignition Facility)-scale (6-mm diameter by 7-mm high) hohlraums to 122 ± 6 eV and reduced-scale (4-mm diameter by 4-mm high) hohlraums to 155 ± 8 eV -- providing environments suitable for indirect-drive ICF (Inertial Confinement Fusion) studies. Eulerian-RMHC (radiation-hydrodynamics code) simulations that take into account the development of the Rayleigh-Taylor instability in the r-z plane provide integrated calculations of the implosion, x-ray generation, and hohlraum heating, as well as estimates of wall motion and plasma fill within the hohlraums. Lagrangian-RMHC simulations suggest that the addition of a 6 mg/cm3 CH2 fill in the reduced-scale hohlraum decreases hohlraum inner-wall velocity by ∼40% with only a 3-5% decrease in peak temperature, in agreement with measurements

  19. Sustainable development relevant comparison of the greenhouse gas emissions from the full energy chains of different energy sources

    International Nuclear Information System (INIS)

    Van De Vate, J.F.

    1997-01-01

    It is emphasized that sustainable energy planning should account for the emissions of all greenhouse gases (GHGs) from the whole energy chain, hence accounting not only for carbon dioxide and not only for the emissions from the combustion of fossil fuels. Lowering greenhouse gas emissions from worldwide energy use can be done most effectively by accounting in energy planning for the full-energy-chain (FENCH) emissions of all GHGs. Only energy sources with similar output can be compared. This study investigates electricity generating technologies, which are compared in terms of their GHG emission factors, expressed in CO2-equivalents per kW.h(e). Earlier IAEA expert meetings are reviewed. A general meeting made overall recommendations about methods and input databases for FENCH-GHG analysis. Two more recent meetings dealt with the energy chains of nuclear and hydropower. The site-specific character of the emission factors of these energy sources is discussed. Both electricity generators have emission factors in the range of 5-30 g CO2-equiv./kW.h(e), which is very low compared to the FENCH-GHG emission factors of fossil-fueled power generation and of most of the renewable power generators. (author)

  20. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  1. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
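
    A minimal sketch of the comparison step described above - forming error vectors by XOR-ing the received word with each codeword and decoding to the codeword whose error vector has least weight - is given below; the toy code and received word are invented for illustration and are not taken from the paper.

        # Toy binary code, invented for illustration.
        codewords = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1), (1, 0, 1, 0, 1), (0, 1, 0, 1, 0)]

        def error_vectors(received):
            # Error vector for each codeword: componentwise XOR with the received word.
            return {c: tuple(r ^ b for r, b in zip(received, c)) for c in codewords}

        def decode(received):
            # Minimum-distance decoding: the codeword whose error vector has least weight.
            errs = error_vectors(received)
            return min(errs, key=lambda c: sum(errs[c]))

        received = (1, 1, 0, 1, 1)
        print(error_vectors(received))
        print(decode(received))     # -> (1, 1, 1, 1, 1); its error vector has weight 1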

  2. The macro economic relevance of renewable energy sources for Switzerland; Volkswirtschaftliche Bedeutung erneuerbarer Energien fuer die Schweiz

    Energy Technology Data Exchange (ETDEWEB)

    Nathani, C.; Schmid, C.; Rieser, A.; Ruetter, H. [Ruetter und Partner, Rueschlikon (Switzerland); Bernath, K.; Felten, N. von [Ernst Basler und Partner, Zollikon (Switzerland); Walz, R.; Marscheider-Weidemann, F. [Fraunhofer Institut fuer System- und Innovationsforschung, Karlsruhe (Germany)

    2013-01-15

    This study analyses the economic relevance of renewable energy in Switzerland. In 2010 the enterprises in the renewable energy sector generated a gross value added of 4.8 bn CHF (equalling 0.9% of Swiss GDP). Employment in this sector approximated 22,800 fulltime jobs (0.6% of total Swiss employment). Including supply chain companies, 1.5% of Swiss GDP and 1.2% of total employment can be related to the use of renewable energy. Exports of renewable energy related goods and services equalled 3.2 bn CHF. Since 2000 the Swiss renewable energy sector has experienced an above-average annual growth of more than 4%. Its potential development until the year 2020 was studied with two scenarios. In the policy scenario, that assumes additional policy measures for renewable energy promotion, direct value added of the renewable energy sector would amount to 6.4 bn CHF (+33%), direct employment would increase to 29,200 fulltime jobs (+28%, gross effects resp.). In the more conservative baseline scenario, growth would be much weaker, but still slightly stronger than anticipated for the average economy. (authors)

  3. Recent experimental results on ICF target implosions by Z-pinch radiation sources and their relevance to ICF ignition studies

    International Nuclear Information System (INIS)

    Mehlhorn, T A; Bailey, J E; Bennett, G; Chandler, G A; Cooper, G; Cuneo, M E; Golovkin, I; Hanson, D L; Leeper, R J; MacFarlane, J J; Mancini, R C; Matzen, M K; Nash, T J; Olson, C L; Porter, J L; Ruiz, C L; Schroen, D G; Slutz, S A; Varnum, W; Vesey, R A

    2003-01-01

    Inertial confinement fusion capsule implosions absorbing up to 35 kJ of x-rays from a ∼220 eV dynamic hohlraum on the Z accelerator at Sandia National Laboratories have produced thermonuclear D-D neutron yields of (2.6±1.3) x 10 10 . Argon spectra confirm a hot fuel with T e ∼ 1 keV and n e ∼ (1-2) x 10 23 cm -3 . Higher performance implosions will require radiation symmetry control improvements. Capsule implosions in a ∼70 eV double-Z-pinch-driven secondary hohlraum have been radiographed by 6.7 keV x-rays produced by the Z-beamlet laser (ZBL), demonstrating a drive symmetry of about 3% and control of P 2 radiation asymmetries to ±2%. Hemispherical capsule implosions have also been radiographed in Z in preparation for future experiments in fast ignition physics. Z-pinch-driven inertial fusion energy concepts are being developed. The refurbished Z machine (ZR) will begin providing scaling information on capsule and Z-pinch in 2006. The addition of a short pulse capability to ZBL will enable research into fast ignition physics in the combination of ZR and ZBL-petawatt. ZR could provide a test bed to study NIF-relevant double-shell ignition concepts using dynamic hohlraums and advanced symmetry control techniques in the double-pinch hohlraum backlit by ZBL

  4. Recent experimental results on ICF target implosions by Z-pinch radiation sources and their relevance to ICF ignition studies

    International Nuclear Information System (INIS)

    Bailey, James E.; Chandler, Gordon Andrew; Vesey, Roger Alan; Hanson, David Lester; Olson, Craig Lee; Nash, Thomas J.; Matzen, Maurice Keith; Ruiz, Carlos L.; Porter, John Larry Jr.; Cuneo, Michael Edward; Varnum, William S.; Bennett, Guy R.; Cooper, Gary Wayne; Schroen, Diana Grace; Slutz, Stephen A.; MacFarlane, Joseph John; Leeper, Ramon Joe; Golovkin, I.E.; Mehlhorn, Thomas Alan; Mancini, Roberto Claudio

    2003-01-01

    Inertial confinement fusion capsule implosions absorbing up to 35 kJ of x-rays from a ∼220 eV dynamic hohlraum on the Z accelerator at Sandia National Laboratories have produced thermonuclear D-D neutron yields of (2.6 ± 1.3) x 10 10 . Argon spectra confirm a hot fuel with Te ∼ 1 keV and n e ∼ (1-2) x 10 23 cm -3 . Higher performance implosions will require radiation symmetry control improvements. Capsule implosions in a ∼70 eV double-Z-pinch-driven secondary hohlraum have been radiographed by 6.7 keV x-rays produced by the Z-beamlet laser (ZBL), demonstrating a drive symmetry of about 3% and control of P 2 radiation asymmetries to ±2%. Hemispherical capsule implosions have also been radiographed in Z in preparation for future experiments in fast ignition physics. Z-pinch-driven inertial fusion energy concepts are being developed. The refurbished Z machine (ZR) will begin providing scaling information on capsule and Z-pinch in 2006. The addition of a short pulse capability to ZBL will enable research into fast ignition physics in the combination of ZR and ZBL-petawatt. ZR could provide a test bed to study NIF-relevant double-shell ignition concepts using dynamic hohlraums and advanced symmetry control techniques in the double-pinch hohlraum backlit by ZBL.

  5. Predictors of Errors of Novice Java Programmers

    Science.gov (United States)

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  6. Localization and analysis of error sources for the numerical SIL proof; Lokalisierung und Analyse von Fehlerquellen beim numerischen SIL-Nachweis

    Energy Technology Data Exchange (ETDEWEB)

    Duepont, D.; Litz, L. [Technische Univ. Kaiserslautern (Germany). Lehrstuhl fuer Automatisierungstechnik; Netter, P. [Infraserv GmbH und Co. Hoechst KG, Frankfurt am Main (Germany)

    2008-07-01

    According to the standard IEC 61511 each safety-related loop is assigned to one of the four Safety Integrity Levels (SILs). For every safety-related loop a SIL-specific Probability of Failure on Demand (PFD) must be proven. Usually, the PFD calculation is performed based upon the failure rates of each loop component aided by commercial software tools. However, this bottom-up approach suffers from many uncertainties. Especially, a lack of reliable failure rate data causes many problems. Reference data collected in different environments are available to solve this situation. However, this pragmatism leads to a PFD bandwidth, not to a single PFD value as desired. In order to make a decision for a numerical value appropriate for the chemical and pharmaceutical process industry a data ascertainment has been initiated by the European NAMUR. Its results display large deficiencies for the bottom-up approach. The error sources leading to this situation are located and analyzed. (orig.)
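
    To make the bottom-up calculation and the resulting PFD bandwidth concrete, the sketch below sums component PFDs for a simple one-out-of-one loop using the common approximation PFD_avg ≈ λ_DU · T_I / 2 and a low/high band of failure rates per component. All failure rates, the proof-test interval and the loop structure are generic illustrative assumptions, not NAMUR data or the commercial tool chain discussed in the paper.

        # Bottom-up PFD sketch for a 1oo1 loop (sensor, logic solver, final element
        # in series), using PFD_avg ~= lambda_DU * TI / 2 per component. Failure
        # rates are given as (low, high) bands to mimic the spread between
        # reference databases. All numbers are illustrative.

        TI_H = 8760.0                                # proof-test interval: one year, in hours

        LAMBDA_DU_BANDS = {                          # dangerous undetected failures per hour
            "sensor":        (1e-7, 5e-7),
            "logic_solver":  (1e-8, 5e-8),
            "final_element": (5e-7, 2e-6),
        }

        def loop_pfd(pick):
            # Series structure: component PFDs add (to first order).
            return sum(pick(band) * TI_H / 2.0 for band in LAMBDA_DU_BANDS.values())

        pfd_low, pfd_high = loop_pfd(min), loop_pfd(max)
        print(f"PFD_avg band: {pfd_low:.2e} .. {pfd_high:.2e}")

    With these invented numbers the loop's PFD band spans roughly 2.7e-3 to 1.1e-2, i.e. it straddles the SIL 1 / SIL 2 boundary - exactly the kind of ambiguity that motivates the data ascertainment described above.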

  7. Bromide Sources and Loads in Swiss Surface Waters and Their Relevance for Bromate Formation during Wastewater Ozonation.

    Science.gov (United States)

    Soltermann, Fabian; Abegglen, Christian; Götz, Christian; von Gunten, Urs

    2016-09-20

    Bromide measurements and mass balances in the catchments of major Swiss rivers revealed that chemical industry and municipal waste incinerators are the most important bromide sources and account for ∼50% and ∼20%, respectively, of the ∼2000 tons of bromide discharged in the Rhine river in 2014 in Switzerland. About 100 wastewater treatment plants (WWTPs) will upgrade their treatment for micropollutant abatement in the future to comply with Swiss regulations. An upgrade with ozonation may lead to unintended bromate formation in bromide-containing wastewaters. Measured bromide concentrations were industry). Wastewater ozonation formed little bromate at specific ozone doses of ≤0.4 mg O3/mg DOC, while the bromate yields were almost linearly correlated to the specific ozone dose for higher ozone doses. Molar bromate yields for typical specific ozone doses in wastewater treatment (0.4-0.6 mg O3/mg DOC) are ≤3%. In a modeled extreme scenario (in which all upgraded WWTPs release 10 μg L(-1) of bromate), bromate concentrations increased by major Swiss rivers and by several micrograms per liter in receiving water bodies with a high fraction of municipal wastewater.

  8. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  9. Error sources in the real-time NLDAS incident surface solar radiation and an evaluation against field observations and the NARR

    Science.gov (United States)

    Park, G.; Gao, X.; Sorooshian, S.

    2005-12-01

    The atmospheric model is sensitive to land surface interactions, and its coupling with Land Surface Models (LSMs) leads to a better ability to forecast weather under extreme climate conditions, such as droughts and floods (Atlas et al. 1993; Beljaars et al. 1996). However, it is still questionable how accurately the surface exchanges can be simulated using LSMs, since terrestrial properties and processes have high variability and heterogeneity. Examinations with long-term and multi-site surface observations, including both remotely sensed and ground observations, are highly needed to make an objective evaluation of the effectiveness and uncertainty of LSMs under different circumstances. Among the several atmospheric forcing variables required for the offline simulation of LSMs, incident surface solar radiation is one of the most significant components, since it plays a major role in the total incoming energy into the land surface. The North American Land Data Assimilation System (NLDAS) and North American Regional Reanalysis (NARR) are two important data sources providing high-resolution surface solar radiation data for the use of research communities. In this study, these data are evaluated against field observations (AmeriFlux) to identify their advantages, deficiencies and sources of errors. The NLDAS incident solar radiation shows good agreement in the monthly mean prior to the summer of 2001, while it overestimates after the summer of 2001, with a bias close to that of the EDAS. Two main error sources are identified: 1) GOES solar radiation was not used in the NLDAS for several months in 2001 and 2003, and 2) GOES incident solar radiation, when available, was positively biased in 2002. The known snow detection problem is sometimes identified in the NLDAS, since it is inherited from GOES incident solar radiation. The NARR consistently overestimates incident surface solar radiation, which might produce erroneous outputs if used in the LSMs. Further attention is given to

  10. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  11. Carbohydrates from Sources with a Higher Glycemic Index during Adolescence: Is Evening Rather than Morning Intake Relevant for Risk Markers of Type 2 Diabetes in Young Adulthood?

    Science.gov (United States)

    Diederichs, Tanja; Herder, Christian; Roßbach, Sarah; Roden, Michael; Wudy, Stefan A; Nöthlings, Ute; Alexy, Ute; Buyken, Anette E

    2017-06-10

    This study investigated whether glycemic index (GI) or glycemic load (GL) of morning or evening intake and morning or evening carbohydrate intake from low- or higher-GI food sources (low-GI-CHO, higher-GI-CHO) during adolescence are relevant for risk markers of type 2 diabetes in young adulthood. Methods: Analyses included DOrtmund Nutritional and Anthropometric Longitudinally Designed (DONALD) study participants who had provided at least two 3-day weighed dietary records (median: 7 records) during adolescence and one blood sample in young adulthood. Using multivariable linear regression analyses, estimated morning and evening GI, GL, low-GI-CHO (GI < 55) and higher-GI-CHO (GI ≥ 55) were related to insulin sensitivity (N = 252), hepatic steatosis index (HSI), fatty liver index (FLI) (both N = 253), and a pro-inflammatory score (N = 249). Results: Morning intakes during adolescence were not associated with any of the adult risk markers. A higher evening GI during adolescence was related to an increased HSI in young adulthood (p = 0.003). A higher consumption of higher-GI-CHO in the evening was associated with lower insulin sensitivity (p = 0.046) and an increased HSI (p = 0.006), while a higher evening intake of low-GI-CHO was related to a lower HSI (p = 0.009). Evening intakes were not related to FLI or the pro-inflammatory score (all p > 0.1). Conclusion: Avoidance of large amounts of carbohydrates from higher-GI sources in the evening should be considered in preventive strategies to reduce the risk of type 2 diabetes in adulthood.

  12. Field errors in hybrid insertion devices

    International Nuclear Information System (INIS)

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed

  13. Field errors in hybrid insertion devices

    Energy Technology Data Exchange (ETDEWEB)

    Schlueter, R.D. [Lawrence Berkeley Lab., CA (United States)

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  14. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  15. Carbohydrates from Sources with a Higher Glycemic Index during Adolescence: Is Evening Rather than Morning Intake Relevant for Risk Markers of Type 2 Diabetes in Young Adulthood?

    Directory of Open Access Journals (Sweden)

    Tanja Diederichs

    2017-06-01

    Full Text Available Background: This study investigated whether glycemic index (GI) or glycemic load (GL) of morning or evening intake and morning or evening carbohydrate intake from low- or higher-GI food sources (low-GI-CHO, higher-GI-CHO) during adolescence are relevant for risk markers of type 2 diabetes in young adulthood. Methods: Analyses included DOrtmund Nutritional and Anthropometric Longitudinally Designed (DONALD) study participants who had provided at least two 3-day weighed dietary records (median: 7 records) during adolescence and one blood sample in young adulthood. Using multivariable linear regression analyses, estimated morning and evening GI, GL, low-GI-CHO (GI < 55) and higher-GI-CHO (GI ≥ 55) were related to insulin sensitivity (N = 252), hepatic steatosis index (HSI), fatty liver index (FLI) (both N = 253), and a pro-inflammatory score (N = 249). Results: Morning intakes during adolescence were not associated with any of the adult risk markers. A higher evening GI during adolescence was related to an increased HSI in young adulthood (p = 0.003). A higher consumption of higher-GI-CHO in the evening was associated with lower insulin sensitivity (p = 0.046) and an increased HSI (p = 0.006), while a higher evening intake of low-GI-CHO was related to a lower HSI (p = 0.009). Evening intakes were not related to FLI or the pro-inflammatory score (all p > 0.1). Conclusion: Avoidance of large amounts of carbohydrates from higher-GI sources in the evening should be considered in preventive strategies to reduce the risk of type 2 diabetes in adulthood.

  16. E-SovTox: An online database of the main publicly-available sources of toxicity data concerning REACH-relevant chemicals published in the Russian language.

    Science.gov (United States)

    Sihtmäe, Mariliis; Blinova, Irina; Aruoja, Villem; Dubourguier, Henri-Charles; Legrand, Nicolas; Kahru, Anne

    2010-08-01

    A new open-access online database, E-SovTox, is presented. E-SovTox provides toxicological data for substances relevant to the EU Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) system, from publicly-available Russian language data sources. The database contains information selected mainly from scientific journals published during the Soviet Union era. The main information source for this database - the journal, Gigiena Truda i Professional'nye Zabolevania [Industrial Hygiene and Occupational Diseases], published between 1957 and 1992 - features acute, but also chronic, toxicity data for numerous industrial chemicals, e.g. for rats, mice, guinea-pigs and rabbits. The main goal of the abovementioned toxicity studies was to derive the maximum allowable concentration limits for industrial chemicals in the occupational health settings of the former Soviet Union. Thus, articles featured in the database include mostly data on LD50 values, skin and eye irritation, skin sensitisation and cumulative properties. Currently, the E-SovTox database contains toxicity data selected from more than 500 papers covering more than 600 chemicals. The user is provided with the main toxicity information, as well as abstracts of these papers in Russian and in English (given as provided in the original publication). The search engine allows cross-searching of the database by the name or CAS number of the compound, and the author of the paper. The E-SovTox database can be used as a decision-support tool by researchers and regulators for the hazard assessment of chemical substances. 2010 FRAME.

  17. Relevance analysis and short-term prediction of PM2.5 concentrations in Beijing based on multi-source data

    Science.gov (United States)

    Ni, X. Y.; Huang, H.; Du, W. P.

    2017-02-01

    The PM2.5 problem is proving to be a major public crisis and is of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted to carry out relevance analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, maximum wind speed, and other pollutant concentration data, including CO, NO2, SO2, PM10) and social media data (microblog data) was proposed, based on multivariate statistical analysis. The study found that, among these factors, the average wind speed, the concentrations of CO, NO2 and PM10, and the daily number of microblog entries with the key words 'Beijing; Air pollution' show a high correlation with PM2.5 concentrations. The correlation analysis was further studied using a machine learning model, the Back Propagation Neural Network (hereinafter referred to as BPNN). It was found that the BPNN method performs better in correlation mining. Finally, an Autoregressive Integrated Moving Average (hereinafter referred to as ARIMA) time series model was applied in this paper to explore the short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study is useful for realizing real-time monitoring, analysis and pre-warning of PM2.5, and it also helps to broaden the application of big data and multi-source data mining methods.
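
    A hedged sketch of the two steps described above - screening candidate drivers for relevance and then issuing a short-term forecast - is given below. It uses synthetic daily data, a plain Pearson correlation in place of the BPNN (a multilayer-perceptron regressor could be substituted for nonlinear relevance mining), and an illustrative ARIMA order; none of the numbers correspond to the Beijing dataset.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        n = 200                                     # days of synthetic data

        # Synthetic daily series standing in for the multi-source inputs of the study.
        wind = rng.uniform(0.5, 6.0, n)             # average wind speed (m/s)
        co = rng.uniform(0.3, 2.5, n)               # CO concentration (mg/m3)
        pm25 = 120.0 - 12.0 * wind + 30.0 * co + rng.normal(0.0, 10.0, n)
        df = pd.DataFrame({"pm25": pm25, "wind": wind, "co": co})

        # Step 1: relevance screening via Pearson correlation with PM2.5.
        print(df.corr()["pm25"].drop("pm25"))

        # Step 2: short-term forecast with an ARIMA(1, 0, 1) model; the order is an
        # illustrative choice, not the one identified in the paper.
        fit = ARIMA(df["pm25"], order=(1, 0, 1)).fit()
        print(fit.forecast(steps=3))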

  18. Quantification and handling of sampling errors in instrumental measurements: a case study

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.

    2004-01-01

    in certain situations, the effect of systematic errors is also considerable. The relevant errors contributing to the prediction error are: error in instrumental measurements (x-error), error in reference measurements (y-error), error in the estimated calibration model (regression coefficient error) and model...

  19. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
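
    As an illustration of the kind of model the report describes, the sketch below propagates a two-state clock error model (phase offset and frequency offset) and estimates it from noisy phase measurements with a Kalman filter. The state-space form is the standard one; the process- and measurement-noise magnitudes are invented, and additional oscillator noise terms (e.g. frequency drift or flicker noise) present in the full models are omitted.

        import numpy as np

        rng = np.random.default_rng(2)
        dt = 1.0                                   # seconds between measurements

        # Two-state clock model: x = [phase offset (s), frequency offset (s/s)].
        F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
        H = np.array([[1.0, 0.0]])                 # only the phase offset is measured
        Q = np.diag([1e-22, 1e-24])                # process noise (illustrative magnitudes)
        R = np.array([[1e-18]])                    # measurement noise variance

        x_true = np.array([0.0, 1e-10])            # true initial phase / frequency offsets
        x_est = np.zeros(2)
        P = np.diag([1e-16, 1e-20])

        for _ in range(100):
            # Simulate the true clock and a noisy phase measurement.
            x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
            z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]))
            # Kalman predict / update.
            x_est = F @ x_est
            P = F @ P @ F.T + Q
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x_est = x_est + (K @ (z - H @ x_est)).ravel()
            P = (np.eye(2) - K @ H) @ P

        print("estimated phase and frequency offsets:", x_est)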

  20. Source of errors and accuracy of a two-dimensional/three-dimensional fusion road map for endovascular aneurysm repair of abdominal aortic aneurysm.

    Science.gov (United States)

    Kauffmann, Claude; Douane, Frédéric; Therasse, Eric; Lessard, Simon; Elkouri, Stephane; Gilbert, Patrick; Beaudoin, Nathalie; Pfister, Marcus; Blair, Jean François; Soulez, Gilles

    2015-04-01

    To evaluate the accuracy and source of errors using a two-dimensional (2D)/three-dimensional (3D) fusion road map for endovascular aneurysm repair (EVAR) of abdominal aortic aneurysm. A rigid 2D/3D road map was tested in 16 patients undergoing EVAR. After 3D/3D manual registration of preoperative multidetector computed tomography (CT) and cone beam CT, abdominal aortic aneurysm outlines were overlaid on live fluoroscopy/digital subtraction angiography (DSA). Patient motion was evaluated using bone landmarks. The misregistration of renal and internal iliac arteries were estimated by 3 readers along head-feet and right-left coordinates (z-axis and x-axis, respectively) before and after bone and DSA corrections centered on the lowest renal artery. Iliac deformation was evaluated by comparing centerlines before and during intervention. A score of clinical added value was estimated as high (z-axis 5 mm). Interobserver reproducibility was calculated by the intraclass correlation coefficient. The lowest renal artery misregistration was estimated at x-axis = 10.6 mm ± 11.1 and z-axis = 7.4 mm ± 5.3 before correction and at x-axis = 3.5 mm ± 2.5 and z-axis = 4.6 mm ± 3.7 after bone correction (P = .08), and at 0 after DSA correction (P artery was estimated at x-axis = 2.4 mm ± 2.0 and z-axis = 2.2 mm ± 2.0. Score of clinical added value was low (n = 11), good (n= 0), and high (n= 5) before correction and low (n = 5), good (n = 4), and high (n = 7) after bone correction. Interobserver intraclass correlation coefficient for misregistration measurements was estimated at 0.99. Patient motion before stent graft delivery was estimated at x-axis = 8 mm ± 5.8 and z-axis = 3.0 mm ± 2.7. The internal iliac artery misregistration measurements were estimated at x-axis = 6.1 mm ± 3.5 and z-axis = 5.6 mm ± 4.0, and iliac centerline deformation was estimated at 38.3 mm ± 15.6. Rigid registration is feasible and fairly accurate. Only a partial reduction of vascular

  1. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity and the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman to speak without bias and prejudice for the public good; technical jargon with unclear definitions within the radioactive nomenclature; and a scientific community that keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. There are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the Federal and State Health Agencies' resources to address comparative risk, and regulatory agencies speaking out without the support of the scientific community

  2. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Sources of nitrous oxide and other climate relevant gases on surface area in a dairy free stall barn with solid floor and outside slurry storage

    Science.gov (United States)

    Schmithausen, Alexander J.; Trimborn, Manfred; Büscher, Wolfgang

    2018-04-01

    Livestock production systems in agriculture are one of the major emitters of greenhouse gases. So far, the focus of research in the dairy farm sector has been primarily on ruminal methane (CH4) emissions. Emissions of nitrous oxide (N2O) usually arise from solid manure or in deep-litter free stall barns. Release of N2O occurs as a result of interactions between organic material, nitrogen and moisture. Data on N2O emissions from modern dairy barns and liquid manure management systems are rare. Thus, the goal of this research was to determine the main sources of trace gas emissions at the dairy farm level, including N2O. Areas such as the scraped surface area, where dry and wet conditions alternate, are of particular interest. Possible sources of trace gases within and outside the barn were localised by measuring trace gas concentration rates from different dairy farm areas (e.g., areas covered with urine and excrement or the slurry storage system) via the closed chamber technique. The results indicate typical emission ratios of carbon dioxide (CO2), CH4 and N2O in the various areas to generate comparable equivalent values. Calculated on the basis of nitrogen excretion from dairy cows, total emissions of N2O were much lower from barns than typically measured in fields. However, there were also areas within the barn with individual events and unexpected release factors of N2O concentrations, such as urine patches, polluted areas and cubicles. Emission factors of N2O ranged from 1.1 to 5.0 mg m-2 d-1 for cleaned areas and urine patches, respectively. By considering the release factors of these areas and their proportion of the entire barn, total emission rates of 371 CO2-eq. LU-1 a-1, 36 CO2-eq. LU-1 a-1, and 1.7 kg CO2-eq. LU-1 a-1 for CO2, CH4 and N2O, respectively, were measured for the whole barn surface area. The CH4 emissions from the surface area were more climate-relevant than the N2O emissions, but compared to CH4 emissions from slurry storage or ruminal fermentation (not

  4. An assessment of the evaporation and condensation phenomena of lithium during the operation of a Li(d,xn) fusion relevant neutron source

    Directory of Open Access Journals (Sweden)

    J. Knaster

    2016-12-01

    Full Text Available The flowing lithium target of a Li(d,xn) fusion relevant neutron source must evacuate the deuteron beam power and generate, in a stable manner, a flux of neutrons with a broad peak at 14 MeV capable of causing phenomena similar to those that the structural materials of plasma facing components of a DEMO-like reactor would undergo. Whereas the physics of the beam-target interaction is understood and the stability of the lithium screen flowing at the nominal conditions of IFMIF (25 mm thick screen with ±1 mm surface amplitudes flowing at 15 m/s and 523 K) has been demonstrated, a conclusive assessment of the evaporation and condensation of lithium during operation was missing. First attempts to determine evaporation rates were made by Hertz in 1882, and the subject has since been the object of continuous efforts driven by its practical importance; however, intense surface evaporation is essentially a non-equilibrium process with its inherent theoretical difficulties. The Hertz-Knudsen-Langmuir (HKL) equation with Schrage’s ‘accommodation factor’ η = 1.66 provides excellent agreement with experiments for weak evaporation under certain conditions, which are present during operation of a Li(d,xn) facility. An assessment of the impact under the known operational conditions for IFMIF (574 K and 10−3 Pa on the free surface), with the sticking probability of 1 inherent to a hot lithium gas contained in room temperature steel walls, is carried out. An explanation of the main physical concepts to adequately place the needed assumptions is included.

  5. Identifying afterloading PDR and HDR brachytherapy errors using real-time fiber-coupled Al2O3:C dosimetry and a novel statistical error decision criterion

    International Nuclear Information System (INIS)

    Kertzscher, Gustavo; Andersen, Claus E.; Siebert, Frank-Andre; Nielsen, Soren Kynde; Lindegaard, Jacob C.; Tanderup, Kari

    2011-01-01

    Background and purpose: The feasibility of a real-time in vivo dosimeter to detect errors has previously been demonstrated. The purpose of this study was to: (1) quantify the sensitivity of the dosimeter to detect imposed treatment errors under well controlled and clinically relevant experimental conditions, and (2) test a new statistical error decision concept based on full uncertainty analysis. Materials and methods: Phantom studies of two gynecological cancer PDR and one prostate cancer HDR patient treatment plans were performed using tandem ring applicators or interstitial needles. Imposed treatment errors, including interchanged pairs of afterloader guide tubes and 2-20 mm source displacements, were monitored using a real-time fiber-coupled carbon doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated at three dose levels: dwell position, source channel, and fraction. The error criterion incorporated the correlated source position uncertainties and other sources of uncertainty, and it was applied both for the specific phantom patient plans and for a general case (source-detector distance 5-90 mm and position uncertainty 1-4 mm). Results: Out of 20 interchanged guide tube errors, time-resolved analysis identified 17 while fraction level analysis identified two. Channel and fraction level comparisons could leave 10 mm dosimeter displacement errors unidentified. Dwell position dose rate comparisons correctly identified displacements ≥5 mm. Conclusion: This phantom study demonstrates that Al2O3:C real-time dosimetry can identify applicator displacements ≥5 mm and interchanged guide tube errors during PDR and HDR brachytherapy. The study demonstrates the shortcoming of a constant error criterion and the advantage of a statistical error criterion.
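
    The statistical criterion contrasted here with a constant criterion can be illustrated with a minimal sketch (Python); the dose rates, uncertainties and the 2-sigma decision level are invented for illustration and are not taken from this study.

        import math

        def flag_error(measured, expected, u_measured, u_expected, k=2.0):
            """Statistical error criterion (sketch): flag a dwell position, channel or fraction
            when the measured-vs-expected dose-rate discrepancy exceeds k times the
            combined standard uncertainty of the comparison."""
            u_combined = math.sqrt(u_measured**2 + u_expected**2)
            return abs(measured - expected) > k * u_combined

        # Hypothetical dwell-position dose rates (cGy/h) and 1-sigma uncertainties.
        print(flag_error(41.0, 52.0, 2.5, 3.0))   # True  -> investigate (possible displacement)
        print(flag_error(50.5, 52.0, 2.5, 3.0))   # False -> within statistical tolerance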

  6. Random Measurement Error as a Source of Discrepancies between the Reports of Wives and Husbands Concerning Marital Power and Task Allocation.

    Science.gov (United States)

    Quarm, Daisy

    1981-01-01

    Findings for couples (N=119) show that the low between-spouse correlations for wife's work, money, and spare time are due in part to random measurement error. Suggests that increasing the reliability of measures by creating multi-item indices can also increase correlations. Car purchase, vacation, and child discipline were not accounted for by random measurement…

  7. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study the simplified models of the ATM (Asynchronous Transfer Mode multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed by the different output service schemes.

  8. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the medication error problem, types of medication errors, common causes of medication errors, monitoring of medication errors, consequences of medication errors, prevention of medication errors and management of medication errors are explained clearly, with tables that are easy to understand.

  9. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is the sample density, and L is the path length through the sample. The density and path length are measured together by Rutherford backscatter. The error propagates as Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL), which can be rewritten in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = [1/ln(T)](ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
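
    A minimal numerical sketch (Python) of this error budget, simply evaluating the expressions above; the backlighter signals and areal density below are invented values, not numbers from the report.

        import math

        # Illustrative inputs only.
        B, B0 = 0.30, 1.00                 # transmitted and unattenuated backlighter signals
        dB, dB0 = 0.01, 0.01               # their absolute uncertainties
        rhoL, d_rhoL = 1.0e-3, 5.0e-5      # areal density rho*L (g/cm^2) and its uncertainty

        T = B / B0                         # transmission
        k = -math.log(T) / rhoL            # opacity, k = -ln(T)/(rho*L)

        # Fractional error: (1/ln T)(dB/B + dB0/B0) + d(rho*L)/(rho*L)
        dk_over_k = (1.0 / abs(math.log(T))) * (dB / B + dB0 / B0) + d_rhoL / rhoL
        print(f"k = {k:.0f} cm^2/g, dk/k = {100 * dk_over_k:.1f}%")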

  10. Artificial Intelligence and Second Language Learning: An Efficient Approach to Error Remediation

    Science.gov (United States)

    Dodigovic, Marina

    2007-01-01

    While theoretical approaches to error correction vary in the second language acquisition (SLA) literature, most sources agree that such correction is useful and leads to learning. While some point out the relevance of the communicative context in which the correction takes place, others stress the value of consciousness-raising. Trying to…

  11. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  12. Common characterization of variability and forecast errors of variable energy sources and their mitigation using reserves in power system integration studies

    Energy Technology Data Exchange (ETDEWEB)

    Menemenlis, N.; Huneault, M. [IREQ, Varennes, QC (Canada); Robitaille, A. [Dir. Plantif. de la Production Eolienne, Montreal, QC (Canada). HQ Production; Holttinen, H. [VTT Technical Research Centre of Finland, VTT (Finland)

    2012-07-01

    In this paper we define and characterize the two random variables, variability and forecast error, over which uncertainty in power systems operations is characterized and mitigated. We show that the characterization of both these variables can be carried out with the same mathematical tools. Furthermore, this common characterization of random variables lends itself to a common methodology for the calculation of non-contingency reserves required to mitigate their effects. A parallel comparison of these two variables demonstrates similar inherent statistical properties. They depend on imminent conditions, evolve with time and can be asymmetric. Correlation is an important factor when aggregating individual wind farm characteristics in forming the distribution of the total wind generation for imminent conditions. (orig.)
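
    A minimal sketch (Python/NumPy) of the common treatment described here, using synthetic hourly data; the percentile-based reserve band is a generic illustration of sizing non-contingency reserves from an empirical distribution, not the authors' specific method.

        import numpy as np

        rng = np.random.default_rng(0)
        actual = rng.normal(500, 120, 1000).clip(min=0)      # hypothetical hourly wind generation (MW)
        forecast = actual + rng.normal(0, 60, 1000)          # hypothetical hour-ahead forecast (MW)

        variability = np.diff(actual)        # hour-to-hour step changes
        forecast_error = actual - forecast   # forecast error, same units

        def reserve_band(x, coverage=0.997):
            """Symmetric band covering the requested fraction of the empirical distribution."""
            q = (1.0 - coverage) / 2.0
            return np.quantile(x, q), np.quantile(x, 1.0 - q)

        print("variability band (MW):   ", reserve_band(variability))
        print("forecast-error band (MW):", reserve_band(forecast_error))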

  13. A response matrix method for one-speed discrete ordinates fixed source problems in slab geometry with no spatial truncation error

    International Nuclear Information System (INIS)

    Lydia, Emilio J.; Barros, Ricardo C.

    2011-01-01

    In this paper we describe a response matrix method for one-speed slab-geometry discrete ordinates (SN) neutral particle transport problems that is completely free from spatial truncation errors. The unknowns in the method are the cell-edge angular fluxes of particles. The numerical results generated for these quantities are exactly those obtained from the analytic solution of the SN problem apart from finite arithmetic considerations. Our method is based on a spectral analysis that we perform in the SN equations with scattering inside a discretization cell of the spatial grid set up on the slab. As a result of this spectral analysis, we are able to obtain an expression for the local general solution of the SN equations. With this local general solution, we determine the response matrix and use the prescribed boundary conditions and continuity conditions to sweep across the discretization cells from left to right and from right to left across the slab, until a prescribed convergence criterion is satisfied. (author)

  14. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.
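
    Written schematically in LaTeX (the positive constant a depends on the lattice and is not specified in the abstract):

        \left\langle \frac{\Delta\beta}{\beta} \right\rangle \simeq a\,\sigma_{\Delta\beta/\beta}^{2},
        \qquad a > 0, \qquad \langle \Delta Q \rangle \approx 0,

    i.e. the average beta-beating grows quadratically with the rms beating, while the tune shows no systematic shift.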

  15. Quantifying sources of bias in longitudinal data linkage studies of child abuse and neglect: measuring impact of outcome specification, linkage error, and partial cohort follow-up.

    Science.gov (United States)

    Parrish, Jared W; Shanahan, Meghan E; Schnitzer, Patricia G; Lanier, Paul; Daniels, Julie L; Marshall, Stephen W

    2017-12-01

    Health informatics projects combining statewide birth populations with child welfare records have emerged as a valuable approach to conducting longitudinal research of child maltreatment. The potential bias resulting from linkage misspecification, partial cohort follow-up, and outcome misclassification in these studies has been largely unexplored. This study integrated epidemiological survey and novel administrative data sources to establish the Alaska Longitudinal Child Abuse and Neglect Linkage (ALCANLink) project. Using these data we evaluated and quantified the impact of non-linkage misspecification and single source maltreatment ascertainment use on reported maltreatment risk and effect estimates. The ALCANLink project integrates the 2009-2011 Alaska Pregnancy Risk Assessment Monitoring System (PRAMS) sample with multiple administrative databases through 2014, including one novel administrative source to track out-of-state emigration. For this project we limited our analysis to the 2009 PRAMS sample. We report on the impact of linkage quality, cohort follow-up, and multisource outcome ascertainment on the incidence proportion of reported maltreatment before age 6 and hazard ratios of selected characteristics that are often available in birth cohort linkage studies of maltreatment. Failure to account for out-of-state emigration biased the incidence proportion by 12% (from 28.3%w to 25.2%w), and the hazard ratio (HR) by as much as 33% for some risk factors. Overly restrictive linkage parameters biased the incidence proportion downwards by 43% and the HR by as much as 27% for some factors. Multi-source linkages, on the other hand, were of little benefit for improving reported maltreatment ascertainment. Using the ALCANLink data which included a novel administrative data source, we were able to observe and quantify bias to both the incidence proportion and HR in a birth cohort linkage study of reported child maltreatment. Failure to account for out
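
    A one-line check of the arithmetic behind the quoted 12% (the two weighted incidence proportions are taken from the abstract; reading the figure as a relative change is our assumption):

        corrected, biased = 28.3, 25.2   # incidence proportions (%), with vs without emigration tracking
        print(round((corrected - biased) / biased * 100, 1))   # -> 12.3, i.e. roughly the reported 12%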

  16. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  17. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  18. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  19. The effect of subject measurement error on joint kinematics in the conventional gait model: Insights from the open-source pyCGM tool using high performance computing methods.

    Science.gov (United States)

    Schwartz, Mathew; Dixon, Philippe C

    2018-01-01

    The conventional gait model (CGM) is a widely used biomechanical model which has been validated over many years. The CGM relies on retro-reflective markers placed along anatomical landmarks, a static calibration pose, and subject measurements as inputs for joint angle calculations. While past literature has shown the possible errors caused by improper marker placement, studies on the effects of inaccurate subject measurements are lacking. Moreover, as many laboratories rely on the commercial version of the CGM, released as the Plug-in Gait (Vicon Motion Systems Ltd, Oxford, UK), integrating improvements into the CGM code is not easily accomplished. This paper introduces a Python implementation for the CGM, referred to as pyCGM, which is an open-source, easily modifiable, cross platform, and high performance computational implementation. The aims of pyCGM are to (1) reproduce joint kinematic outputs from the Vicon CGM and (2) be implemented in a parallel approach to allow integration on a high performance computer. The aims of this paper are to (1) demonstrate that pyCGM can systematically and efficiently examine the effect of subject measurements on joint angles and (2) be updated to include new calculation methods suggested in the literature. The results show that the calculated joint angles from pyCGM agree with Vicon CGM outputs, with a maximum lower body joint angle difference of less than 10^-5 degrees. Through the hierarchical system, the ankle joint is the most vulnerable to subject measurement error. Leg length has the greatest effect on all joints as a percentage of measurement error. When compared to the errors previously found through inter-laboratory measurements, the impact of subject measurements is minimal, and researchers should rather focus on marker placement. Finally, we showed that code modifications can be performed to include improved hip, knee, and ankle joint centre estimations suggested in the existing literature. The pyCGM code is provided
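
    A minimal sketch (Python) of the kind of sensitivity sweep described, recomputing joint angles while one subject measurement is perturbed. The function compute_angles() is a hypothetical stand-in for a pyCGM-style joint-angle calculation, not the actual pyCGM API, and the measurement name is an assumption.

        def compute_angles(markers, measurements):
            # Placeholder for a CGM joint-angle calculation; returns a dummy result so the
            # sketch runs. A real study would evaluate the gait model here.
            return {"ankle_flexion_deg": 0.0}

        def sweep_measurement(markers, base_measurements, name, errors_mm):
            """Recompute joint angles while perturbing one subject measurement (e.g. leg length).
            Each evaluation is independent, so in practice the loop is trivially parallelisable."""
            results = {}
            for err in errors_mm:
                m = dict(base_measurements)
                m[name] = m[name] + err
                results[err] = compute_angles(markers, m)
            return results

        base = {"LeftLegLength": 900.0}   # mm, hypothetical subject measurement
        print(sweep_measurement(None, base, "LeftLegLength", [-20, -10, 0, 10, 20]))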

  20. [Responsibility due to medication errors in France: a study based on SHAM insurance data].

    Science.gov (United States)

    Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P

    2015-03-01

    Medication safety in the hospital setting is a major public health issue. The drug supply chain is a complex process, potentially a source of errors and harm for the patient. SHAM insurances are the biggest French provider of medical liability insurance and a relevant source of data on health care complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a conviction by a court. We did a retrospective study on insurance claims provided by SHAM insurances involving a medication error and leading to a condemnation over a 6-year period (between 2005 and 2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. Consequences of claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were a drug monitoring error (11 cases), an administration error (5 cases), an overdose (6 cases), an allergy (4 cases), a contraindication (3 cases) and an omission (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court expert were errors related to service organization (11), to medical practice (11) or to nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  1. The effect of venous pulsation on the forehead pulse oximeter wave form as a possible source of error in Spo2 calculation.

    Science.gov (United States)

    Shelley, Kirk H; Tamai, Doris; Jablonka, Denis; Gesquiere, Michael; Stout, Robert G; Silverman, David G

    2005-03-01

    Reflective forehead pulse oximeter sensors have recently been introduced into clinical practice. They reportedly have the advantage of faster response times and immunity to the effects of vasoconstriction. Of concern are reports of signal instability and erroneously low Spo2 values with some of these new sensors. During a study of the plethysmographic wave forms from various sites (finger, ear, and forehead) it was noted that in some cases the forehead wave form became unexpectedly complex in configuration. The plethysmographic signals from 25 general anesthetic cases were obtained, which revealed the complex forehead wave form during 5 cases. We hypothesized that the complex wave form was attributable to an underlying venous signal. It was determined that the use of a pressure dressing over the sensor resulted in a return of a normal plethysmographic wave form. Further examination of the complex forehead wave form reveals a morphology consistent with a central venous trace with atrial, cuspidal, and venous waves. It is speculated that the presence of the venous signal is the source of the problems reported with the forehead sensors. It is believed that the venous wave form is a result of the method of attachment rather than the use of reflective plethysmographic sensors.

  2. Gas-phase naphthalene concentration data recovery in ambient air and its relevance as a tracer of sources of volatile organic compounds

    Science.gov (United States)

    Uria-Tellaetxe, Iratxe; Navazo, Marino; de Blas, Maite; Durana, Nieves; Alonso, Lucio; Iza, Jon

    2016-04-01

    Despite the toxicity of naphthalene and the fact that it is a precursor of atmospheric photooxidants and secondary aerosol, studies on ambient gas-phase naphthalene are generally scarce. Moreover, as far as we are aware, this is the first published study using long-term hourly ambient gas-phase naphthalene concentrations. In this work, the usefulness of ambient gas-phase naphthalene for identifying major sources of volatile organic compounds (VOC) in complex scenarios has also been demonstrated. Initially, in order to identify the main benzene emission sources, hourly ambient measurements of 60 VOC were taken during a complete year together with meteorological data in an urban/industrial area. Later, due to the observed co-linearity of some of the emissions, a procedure was developed to recover naphthalene concentration data from the recorded chromatograms to use it as a tracer of the combustion and distillation of petroleum products. The characteristic retention time of this compound was determined by comparing previous simultaneous GC-MS and GC-FID analyses by means of relative retention times, and its concentration was calculated by using relative response factors. The obtained naphthalene concentrations correlated fairly well with ethene (r = 0.86) and benzene (r = 0.92). Besides, the analysis of daily time series showed that these compounds followed a similar pattern, very different from that of other VOC, with minimum concentrations at day-time. This, together with the results from the assessment of the meteorological dependence, pointed to a coke oven as the major naphthalene and benzene emitting source in the study area.
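
    A minimal sketch (Python) of the two quantities the recovery procedure rests on, with benzene as a hypothetical reference compound and invented peak areas; the study's actual calibration factors are not reproduced here.

        def relative_retention_time(rt_analyte, rt_reference):
            """RRT: retention time of the analyte relative to a co-analysed reference peak."""
            return rt_analyte / rt_reference

        def concentration_from_rrf(area_analyte, area_reference, conc_reference, rrf):
            """Quantify the analyte from peak areas with a relative response factor, where
            rrf = (area_analyte/conc_analyte) / (area_reference/conc_reference)
            has been determined beforehand from standards."""
            return (area_analyte / area_reference) * conc_reference / rrf

        print(relative_retention_time(34.2, 12.8))                    # locate the naphthalene peak
        print(concentration_from_rrf(1500.0, 9000.0, 2.0, rrf=0.9))   # concentration, e.g. in ug/m3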

  3. Dynamics of a Z-pinch x-ray source for heating inertial-confinement-fusion relevant hohlraums to 120--160 eV

    Energy Technology Data Exchange (ETDEWEB)

    Sanford, T. W. L.; Olson, R. E.; Mock, R. C.; Chandler, G. A.; Leeper, R. J.; Nash, T. J.; Ruggles, L. E.; Simpson, W. W.; Struve, K. W.; Peterson, D. L. (and others)

    2000-11-01

    A Z-pinch radiation source has been developed that generates 60±20 kJ of x rays with a peak power of 13±4 TW through a 4-mm-diam axial aperture on the Z facility. The source has heated National Ignition Facility-scale (6-mm-diam by 7-mm-high) hohlraums to 122±6 eV and reduced-scale (4-mm-diam by 4-mm-high) hohlraums to 155±8 eV -- providing environments suitable for indirect-drive inertial confinement fusion studies. Eulerian-RMHC (radiation-magnetohydrodynamics code) simulations that take into account the development of the Rayleigh--Taylor instability in the r--z plane provide integrated calculations of the implosion, x-ray generation, and hohlraum heating, as well as estimates of wall motion and plasma fill within the hohlraums. Lagrangian-RMHC simulations suggest that the addition of a 6 mg/cm3 CH2 fill in the reduced-scale hohlraum decreases hohlraum inner-wall velocity by ~40% with only a 3%--5% decrease in peak temperature, in agreement with measurements.

  4. Dynamics of a Z-pinch x-ray source for heating inertial-confinement-fusion relevant hohlraums to 120--160 eV

    International Nuclear Information System (INIS)

    Sanford, T. W. L.; Olson, R. E.; Mock, R. C.; Chandler, G. A.; Leeper, R. J.; Nash, T. J.; Ruggles, L. E.; Simpson, W. W.; Struve, K. W.; Peterson, D. L.

    2000-01-01

    A Z-pinch radiation source has been developed that generates 60±20 kJ of x rays with a peak power of 13±4 TW through a 4-mm-diam axial aperture on the Z facility. The source has heated National Ignition Facility-scale (6-mm-diam by 7-mm-high) hohlraums to 122±6 eV and reduced-scale (4-mm-diam by 4-mm-high) hohlraums to 155±8 eV -- providing environments suitable for indirect-drive inertial confinement fusion studies. Eulerian-RMHC (radiation-magnetohydrodynamics code) simulations that take into account the development of the Rayleigh--Taylor instability in the r--z plane provide integrated calculations of the implosion, x-ray generation, and hohlraum heating, as well as estimates of wall motion and plasma fill within the hohlraums. Lagrangian-RMHC simulations suggest that the addition of a 6 mg/cm3 CH 2 fill in the reduced-scale hohlraum decreases hohlraum inner-wall velocity by ∼40% with only a 3%--5% decrease in peak temperature, in agreement with measurements

  5. Dynamics of a Z-pinch x-ray source for heating inertial-confinement-fusion relevant hohlraums to 120-160 eV

    Science.gov (United States)

    Sanford, T. W. L.; Olson, R. E.; Mock, R. C.; Chandler, G. A.; Leeper, R. J.; Nash, T. J.; Ruggles, L. E.; Simpson, W. W.; Struve, K. W.; Peterson, D. L.; Bowers, R. L.; Matuska, W.

    2000-11-01

    A Z-pinch radiation source has been developed that generates 60±20 kJ of x rays with a peak power of 13±4 TW through a 4-mm-diam axial aperture on the Z facility. The source has heated National Ignition Facility-scale (6-mm-diam by 7-mm-high) hohlraums to 122±6 eV and reduced-scale (4-mm-diam by 4-mm-high) hohlraums to 155±8 eV—providing environments suitable for indirect-drive inertial confinement fusion studies. Eulerian-RMHC (radiation-magnetohydrodynamics code) simulations that take into account the development of the Rayleigh-Taylor instability in the r-z plane provide integrated calculations of the implosion, x-ray generation, and hohlraum heating, as well as estimates of wall motion and plasma fill within the hohlraums. Lagrangian-RMHC simulations suggest that the addition of a 6 mg/cm3 CH2 fill in the reduced-scale hohlraum decreases hohlraum inner-wall velocity by ˜40% with only a 3%-5% decrease in peak temperature, in agreement with measurements.

  6. Radon, methane, carbon dioxide, oil seeps and potentially harmful elements from natural sources and mining area: relevance to planning and development in Great Britain. Summary report

    International Nuclear Information System (INIS)

    Appleton, J.D.

    1995-01-01

    Contaminated land is a major environmental issue in Great Britain mainly due to increased awareness and the change in public attitudes, but also due to pressures of UK and EC environmental legislation and directives. Government policy with respect to contaminated land is to deal with actual threats to health on a risk-based approach taking into account the use and environmental setting of the land; and to bring contaminated land back into beneficial use as far as practicable, and taking into account the principles of sustainability. The government has been concerned primarily with land which is being or has been put to potentially contaminative uses. However, some potentially harmful substances occur naturally and this review is concerned principally with three groups of 'natural' contaminants from geological sources: natural radioactivity, including radon, background radioactivity, and radioactive waters, derived mainly from uranium minerals and their weathering products in rocks and soils; methane, carbon dioxide and oil derived from coal bearing rocks, hydrocarbon source rocks, peat and other natural accumulations of organic matter; and potentially harmful chemical elements (PHEs), including arsenic, cadmium, chromium, copper, fluorine, lead, mercury, nickel, and zinc, derived from naturally occurring rocks and minerals. (author)

  7. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  8. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  9. Medical Error and Moral Luck.

    Science.gov (United States)

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.

  10. Analysis of field errors in existing undulators

    International Nuclear Information System (INIS)

    Kincaid, B.M.

    1990-01-01

    The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs

  11. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  12. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave "significantly" better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs
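
    A generic sketch (Python) of the kind of propagation-of-error comparison described: each instrument's total-plutonium result is assumed to carry independent random and systematic relative uncertainties, which are combined in quadrature. The percentages are invented, not the study's values.

        import math

        def combined_sigma(random_pct, systematic_pct):
            """Combine independent random and systematic relative uncertainties (1-sigma, %)."""
            return math.sqrt(random_pct**2 + systematic_pct**2)

        hlnc = combined_sigma(random_pct=1.0, systematic_pct=1.5)         # hypothetical HLNC components
        calorimeter = combined_sigma(random_pct=0.3, systematic_pct=0.5)  # hypothetical calorimeter components
        print(f"HLNC total: {hlnc:.2f}%   calorimeter total: {calorimeter:.2f}%")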

  13. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  14. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
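
    A minimal sketch (Python/NumPy) of one quantity from this chapter, the probability (error) ellipse of a two-dimensional Gaussian position error, computed from an assumed covariance matrix of the x and y errors.

        import numpy as np

        def error_ellipse(cov, k=2.4477):
            """Semi-axes and orientation of the probability ellipse; k = 2.4477 corresponds
            roughly to the 95% ellipse (sqrt of the chi-square 0.95 quantile with 2 dof)."""
            eigvals, eigvecs = np.linalg.eigh(cov)
            order = np.argsort(eigvals)[::-1]
            a, b = k * np.sqrt(eigvals[order])   # semi-major, semi-minor axes
            vx, vy = eigvecs[:, order[0]]        # direction of the major axis
            return a, b, np.degrees(np.arctan2(vy, vx))

        cov = np.array([[4.0, 1.2],              # illustrative covariance of (x, y) errors, m^2
                        [1.2, 2.0]])
        print(error_ellipse(cov))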

  15. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
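
    A minimal sketch (Python) of error propagation for a materials balance of the kind listed here, assuming the four terms are measured independently; the values are invented.

        import math

        def muf_and_sigma(beginning, receipts, removals, ending):
            """Material unaccounted for (MUF) and its standard deviation from independent
            (value, 1-sigma) pairs for beginning inventory, receipts, removals and ending inventory."""
            muf = beginning[0] + receipts[0] - removals[0] - ending[0]
            sigma = math.sqrt(sum(s**2 for _, s in (beginning, receipts, removals, ending)))
            return muf, sigma

        # Hypothetical balance terms in kg of nuclear material: (measured value, uncertainty).
        print(muf_and_sigma((120.0, 0.4), (35.0, 0.2), (30.0, 0.2), (124.6, 0.4)))   # -> (0.4, ~0.63)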

  16. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  17. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  18. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system...... with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical...... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....

  19. Biofuels Sustainability Criteria. Relevant issues to the proposed Directive on the promotion of the use of energy from renewable sources. (COM(2008) 30 final). Consolidated study

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Francis X.; Roman, Mikael (Stockholm Environment Institute, SE-10691 Stockholm (Sweden)) (and others)

    2008-06-15

    The role envisioned for liquid biofuels for transport has come under increased scrutiny in the past year or two, due to the potential social and environmental impacts associated with scaling up biofuels production and use from its low level - currently representing about 1% of transport fuels globally. The proposed EU Directive setting a target of 10% biofuels in the transport sector by 2020 has therefore raised a number of concerns. The concerns about sustainability are addressed within the proposed Directive through criteria related mainly to GHG emissions, but also to biodiversity and other environmental impacts. The use of first generation biofuels in temperate climates is land-intensive and inefficient in technical terms, whereas first generation biofuels in tropical climates, and second generation biofuels in general, offer a much more effective use of land resources. The use of GHG reduction criteria can provide incentives for producers to rely on the most productive feedstocks when sourcing biofuels for the EU market, which will often mean import of biofuels. A threshold of 50% or more would tend to eliminate many of the first generation biofuels produced in temperate climates. Member States should be encouraged to link financial incentives to the GHG reduction capabilities. Moreover, such incentives could be better linked to development cooperation in the case of imports, so as to ensure that Least Developed Countries (i.e. in Africa) can gain access to larger markets rather than only the major producers such as Brazil. The calculation of GHG emissions associated with biofuels is complicated by the addition of factors associated with land use change, since the GHG impacts of land use change are beset by uncertainty both in physical terms as well as in the attribution of particular changes to production of particular biofuels. A further complication is introduced when indirect land use changes are incorporated, since these occur through combinations of market

  20. Meteorological Error Budget Using Open Source Data

    Science.gov (United States)

    2016-09-01

    A Visual Basic for Applications (VBA) script was created that would read the model-based output and corresponding sounding data for each message type (METCM or METB3), output type...data from the raw data via the use of the Structured Query Language (SQL) inherent in databases. Noting that the bias data are the differences between model and sounding at each line or zone value, we link the model and sounding data for a given message type (for example, METB3) together using a SQL

  1. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  2. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  3. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system level problems, like connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher level planners to make better and more accurate decisions. It is necessary to have well defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  4. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  5. Investigation of systematic errors of metastable "atomic pair" number

    CERN Document Server

    Yazkov, V

    2015-01-01

    Sources of systematic errors in the analysis of data collected in 2012 are analysed. Estimations of systematic errors in the number of “atomic pairs” from metastable π+π− atoms are presented.

  6. Why relevance theory is relevant for lexicography

    DEFF Research Database (Denmark)

    Bothma, Theo; Tarp, Sven

    2014-01-01

    This article starts by providing a brief summary of relevance theory in information science in relation to the function theory of lexicography, explaining the different types of relevance, viz. objective system relevance and the subjective types of relevance, i.e. topical, cognitive, situational...... that is very important for lexicography as well as for information science, viz. functional relevance. Since all lexicographic work is ultimately aimed at satisfying users’ information needs, the article then discusses why the lexicographer should take note of all these types of relevance when planning a new...... dictionary project, identifying new tasks and responsibilities of the modern lexicographer. The article furthermore discusses how relevance theory impacts on teaching dictionary culture and reference skills. By integrating insights from lexicography and information science, the article contributes to new...

  7. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
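
    For orientation, the SS7 level-2 link error monitor discussed here is a leaky-bucket counter; a minimal sketch (Python) follows. The threshold of 64 and leak interval of 256 signal units are the commonly quoted defaults and are stated here as assumptions, not values taken from this paper.

        class LeakyBucketErrorMonitor:
            """Leaky-bucket signal-unit error-rate monitor (sketch): the counter rises by one
            for each errored signal unit, leaks by one every `leak_interval` received signal
            units, and the link is taken out of service when it reaches `threshold`."""

            def __init__(self, threshold=64, leak_interval=256):
                self.threshold = threshold
                self.leak_interval = leak_interval
                self.counter = 0
                self.received = 0

            def on_signal_unit(self, errored):
                self.received += 1
                if errored:
                    self.counter += 1
                if self.received % self.leak_interval == 0 and self.counter > 0:
                    self.counter -= 1
                return self.counter >= self.threshold   # True -> declare link failure (changeover)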

  8. Collection of offshore human error probability data

    International Nuclear Information System (INIS)

    Basra, Gurpreet; Kirwan, Barry

    1998-01-01

    Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses

  9. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis...... the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production...... of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only handful studies have been made either to define the ‘error’ in a pedagogically insightful way or to empirically investigate...

  10. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
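
    The two update rules contrasted in this abstract, written side by side as a minimal sketch (Python); the learning rate, cues and trial structure are invented for illustration.

        def ter_update(weights, present, outcome, lr=0.1):
            """Total error reduction (Rescorla-Wagner style): each present cue is adjusted by
            the discrepancy between the outcome and the summed prediction of all present cues."""
            total = sum(weights[c] for c in present)
            for c in present:
                weights[c] += lr * (outcome - total)

        def ler_update(weights, present, outcome, lr=0.1):
            """Local error reduction: each present cue is adjusted by the discrepancy between
            the outcome and that cue's own prediction."""
            for c in present:
                weights[c] += lr * (outcome - weights[c])

        w_ter = {"A": 0.0, "B": 0.0}
        w_ler = {"A": 0.0, "B": 0.0}
        for _ in range(200):                      # repeated compound trials: AB -> outcome
            ter_update(w_ter, ["A", "B"], 1.0)
            ler_update(w_ler, ["A", "B"], 1.0)
        print(w_ter)   # under TER the cues share the prediction (~0.5 each)
        print(w_ler)   # under LER each cue's weight approaches 1.0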

  11. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    ..., dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model in both polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error/backlash, manufacturing and assembly errors and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. Additionally, using the screw...

  12. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  13. The developmental clock of dental enamel: a test for the periodicity of prism cross-striations in modern humans and an evaluation of the most likely sources of error in histological studies of this kind

    Science.gov (United States)

    Antoine, Daniel; Hillson, Simon; Dean, M Christopher

    2009-01-01

    Dental tissues contain regular microscopic structures believed to result from periodic variations in the secretion of matrix by enamel- and dentine-forming cells. Counts of these structures are an important tool for reconstructing the chronology of dental development in both modern and fossil hominids. Most studies rely on the periodicity of the regular cross-banding that occurs along the long axis of enamel prisms. These prism cross-striations are widely thought to reflect a circadian rhythm of enamel matrix secretion and are generally regarded as representing daily increments of tissue. Previously, some researchers have argued against the circadian periodicity of these structures and questioned their use in reconstructing dental development. Here we tested the periodicity of enamel cross-striations – and the accuracy to which they can be used – in the developing permanent dentition of five children, excavated from a 19th century crypt in London, whose age-at-death was independently known. The interruption of crown formation by death was used to calibrate cross-striation counts. All five individuals produced counts that were strongly consistent with those expected from the independently known ages, taking into account the position of the neonatal line and factors of preservation. These results confirm that cross-striations do indeed reflect a circadian rhythm in enamel matrix secretion. They further validate their use in reconstructing dental development and in determining the age-at-death of the remains of children whose dentitions are still forming at the time of death. Significantly they identify the most likely source of error and the common difficulties encountered in histological studies of this kind. PMID:19166472
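
    If cross-striations are daily increments, postnatal age at death follows from counting them between the neonatal line and the last-formed enamel. A hedged sketch of that arithmetic (Python; the counts and the allowance for unpreserved enamel are hypothetical, and real studies also account for crown-formation position and preservation as described above):

        def age_at_death_years(cross_striation_count, missing_days_estimate=0):
            """Postnatal age from daily enamel increments counted after the neonatal line.

            cross_striation_count: increments counted from the neonatal line to the
            enamel forming at death; missing_days_estimate: allowance for poorly
            preserved or otherwise uncounted enamel.
            """
            total_days = cross_striation_count + missing_days_estimate
            return total_days / 365.25

        print(round(age_at_death_years(1520, missing_days_estimate=30), 2))  # about 4.24 years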

  14. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  15. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  16. Deep learning relevance

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Petersen, Casper

    2016-01-01

    We train a Recurrent Neural Network (RNN) on existing information relevant to a query. We then use the RNN to "deep learn" a single, synthetic and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep-learned synthetic document). The synthetic document is ranked on average as the most relevant of all.

  17. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  18. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  19. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  20. Optimal full motion video registration with rigorous error propagation

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.
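
    Batch registration of this kind can be viewed as a weighted least-squares adjustment in which the a priori sensor trajectory and attitude act as prior observations with their own covariance, and the a posteriori covariance is what is handed to downstream geopositioning. A generic sketch of that idea (Python with NumPy; this is not the authors' estimator, and the matrices below are placeholders):

        import numpy as np

        def wls_with_prior(A, b, R, x0, P0):
            """Weighted least squares fused with an a priori state.

            A, b, R : linearized observation model, measurements, measurement covariance
            x0, P0  : a priori correction estimate and its covariance
            Returns the estimated corrections and their a posteriori covariance.
            """
            W = np.linalg.inv(R)
            N = A.T @ W @ A + np.linalg.inv(P0)          # normal matrix including the prior
            x_hat = np.linalg.solve(N, A.T @ W @ b + np.linalg.inv(P0) @ x0)
            P_post = np.linalg.inv(N)                    # a posteriori accuracy information
            return x_hat, P_post

        # Placeholder 2-parameter, 3-observation example
        A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        b = np.array([0.02, -0.01, 0.015])
        R = np.diag([1e-4, 1e-4, 1e-4])
        x0 = np.zeros(2)
        P0 = np.diag([1e-2, 1e-2])
        print(wls_with_prior(A, b, R, x0, P0))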

  1. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  2. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  3. Characterization of identification errors and uses in localization of poor modal correlation

    Science.gov (United States)

    Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry

    2017-05-01

    While modal identification is a mature subject, very few studies address the characterization of errors associated with components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion is used to pair modes and to localize at which sensors discrepancies occur. Poor correlation is usually attributed to modeling errors, but clearly identification errors also occur. In particular with 3D Scanning Laser Doppler Vibrometer measurement, many transfer functions are measured. As a result individual validation of each measurement cannot be performed manually in a reasonable time frame and a notable fraction of measurements is expected to be fairly noisy leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, diagnostic of the origin of the error is needed. The level evaluates the transfer function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal indicator, product of error and level, is then shown to be relevant to detect poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error it avoids keeping erroneous modal components
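
    The localization step described above rests on the Modal Assurance Criterion; a MACCo-style analysis can be sketched by recomputing the MAC with each sensor removed and ranking sensors by how much their removal improves the test/analysis pairing. A simplified sketch (Python with NumPy; the mode shapes are random placeholders, and this shows only the sorting idea, not the published algorithm):

        import numpy as np

        def mac(phi_a, phi_b):
            """Modal Assurance Criterion between two real mode-shape vectors."""
            return abs(phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

        def rank_sensors_by_mac_impact(phi_test, phi_fem):
            """Sort sensors by the MAC obtained when that sensor is removed."""
            impacts = []
            for s in range(len(phi_test)):
                keep = np.arange(len(phi_test)) != s
                impacts.append((mac(phi_test[keep], phi_fem[keep]), s))
            return sorted(impacts, reverse=True)  # largest MAC after removal first

        rng = np.random.default_rng(0)
        phi_fem = rng.normal(size=12)
        phi_test = phi_fem + 0.05 * rng.normal(size=12)
        phi_test[4] += 2.0  # one badly identified modal component
        print(rank_sensors_by_mac_impact(phi_test, phi_fem)[0])  # sensor 4 is flagged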

  4. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    about weaknesses in audits made by the operating organisation and in tests relating to plant operation. The number of plant-specific maintenance records used as input material was high and the findings were discussed thoroughly with the plant maintenance personnel. The results indicated that instrumentation is more prone to human error than the rest of maintenance. Most errors stem from refuelling outage periods, and about half of them were identified during the same outage in which they were committed. Plant modifications are a significant source of common cause failures. The number of dependent errors could be reduced by improved co-ordination and auditing, post-installation checking, training and start-up testing programmes. (orig.)

  5. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence...... of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, particularly interesting within heterogeneous catalysis. We show...... that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy corrected OCO backbone....

  6. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  7. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...
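
    The display idea, surfacing every laterality term so a left/right mismatch is easy to spot while proofreading, can be approximated with a simple pattern match. A toy sketch (Python; the report text and the markup convention are invented, not the authors' system):

        import re

        LATERALITY = re.compile(r"\b(left|right|bilateral)\b", re.IGNORECASE)

        def highlight_laterality(report_text):
            """Wrap laterality terms in markers so they stand out during report review."""
            return LATERALITY.sub(lambda m: f"**{m.group(0).upper()}**", report_text)

        print(highlight_laterality("Fracture of the left distal radius; right side normal."))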

  8. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated.

  9. Accounting for optical errors in microtensiometry.

    Science.gov (United States)

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane on measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup

  10. Help prevent hospital errors

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/patientinstructions/000618.htm. Help prevent hospital errors ... in the hospital. If You Are Having Surgery, Help Keep Yourself Safe: Go to a hospital you ...

  11. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  12. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
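
    Whether rounding can be ignored is often judged from the standard uniform-quantization result, variance = d^2/12 for a rounding increment d; when that variance is comparable to the scale's repeatability it should enter the precision estimate rather than be dropped. A small sketch of that comparison (Python; the 10% threshold and scale figures are hypothetical, and the paper's own four-cell rule and moment method are not reproduced here):

        def rounding_variance(increment):
            """Variance of an error uniformly distributed over one rounding increment."""
            return increment ** 2 / 12.0

        def rounding_is_negligible(increment, weighing_sd, ratio=0.1):
            """Flag whether rounding variance is small next to the weighing variance."""
            return rounding_variance(increment) < ratio * weighing_sd ** 2

        # Hypothetical scale: 0.5 g display increment, 0.3 g repeatability (1 sigma)
        print(rounding_variance(0.5))             # about 0.021 g^2
        print(rounding_is_negligible(0.5, 0.3))   # False -> include rounding in the model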

  13. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  14. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills of its customers. It appeared that in the year 2000 many small, but also some large, errors were discovered in the bills of 42 businesses.

  15. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  16. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  17. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

    Full Text Available The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.
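
    The signal-to-AC bias ratio quoted in decibels can be computed directly from the two quantities; whether it is defined as an amplitude or a power ratio is an assumption here (an amplitude ratio, 20·log10, is used below). A minimal illustration (Python; the 80 dB criterion is the paper's, the amplitudes are invented):

        import math

        def signal_to_ac_bias_db(signal_amplitude, ac_bias_amplitude):
            """Signal-to-AC bias ratio in dB, treated here as an amplitude ratio."""
            return 20.0 * math.log10(signal_amplitude / ac_bias_amplitude)

        ratio_db = signal_to_ac_bias_db(signal_amplitude=1.0, ac_bias_amplitude=5e-5)
        print(f"{ratio_db:.1f} dB, meets 80 dB criterion: {ratio_db >= 80.0}")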

  18. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  19. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  20. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  1. Managing organizational errors: Three theoretical lenses on a bank collapse

    OpenAIRE

    Giolito, Vincent

    2015-01-01

    Errors have been shown to be a major source of organizational disasters, yet scant research has paid attention to the management of errors, that is, what managers do once errors have occurred and how their actions may determine outcomes. In an early attempt to build a theory of the management of organizational errors, this paper examines how extant theory applies to the collapse of a bank. The financial industry was chosen because of the systemic risks it entails, as demonstrated by the financial cr...

  2. Spectrum of diagnostic errors in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca

    2010-10-28

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff's complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Error traps need to be uncovered and highlighted, in order to prevent repetition of the same mistakes. This article focuses on the spectrum of diagnostic errors in radiology, including a classification of the errors, and stresses the malpractice issues in mammography, chest radiology and obstetric sonography. Missed fractures in emergency and communication issues between radiologists and physicians are also discussed.

  3. Learning mechanisms to limit medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  4. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est” is a well-known and widespread Latin proverb which states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  5. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
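
    The burst and gap statistics described can be obtained from a stream of per-byte error flags by run-length encoding it. A minimal sketch (Python; the flag stream is synthetic, not photoCD measurement data):

        from itertools import groupby

        def burst_and_gap_lengths(error_flags):
            """Run lengths of erroneous bytes (bursts) and error-free bytes (gaps)."""
            bursts, gaps = [], []
            for flag, run in groupby(error_flags):
                (bursts if flag else gaps).append(sum(1 for _ in run))
            return bursts, gaps

        flags = [0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1]
        bursts, gaps = burst_and_gap_lengths(flags)
        print("error burst lengths:", bursts)   # [3, 1, 2]
        print("good-data gap lengths:", gaps)   # [2, 4, 2]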

  6. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    The scientific article is focused on research into the notion of medical error; its medical and legal aspects have been considered. The necessity of legislative consolidation of the notion of «medical error» and of criteria for its legal assessment has been substantiated. In writing the article we used the empirical method together with general scientific and comparative legal methods. The concept of medical error in its civil and legal aspects was compared from the point of view of Ukrainian, European and American scholars. It has been noted that the problem of medical errors has been known since ancient times and exists throughout the world; in fact, regardless of the level of development of medicine, there is no country where doctors never make errors. According to statistics, medical errors are among the top five causes of death worldwide. At the same time, the provision of medical services concerns practically all people. Because human life and health are recognised in Ukraine as the highest social values, medical services must be of high quality and effective. The provision of poor-quality medical services causes harm to health, and sometimes to the lives of people; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must receive legislative consolidation. In addition, the legal assessment of medical errors must be based on uniform principles enshrined in legislation and confirmed by judicial practice.

  7. Wind speed errors for LIDARs and SODARs in complex terrain

    International Nuclear Information System (INIS)

    Bradley, S

    2008-01-01

    All commercial LIDARs and SODARs are monostatic and hence sample distributed volumes to construct wind vector components. We use an analytic potential flow model to estimate errors arising for a range of LIDAR and SODAR configurations on hills and escarpments. Wind speed errors peak at a height relevant to wind turbines and can typically be 20%.

  8. Wind speed errors for LIDARs and SODARs in complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, S [Physics Department, The University of Auckland, Private Bag 92019, Auckland (New Zealand) and School of Computing, Science and Engineering, University of Salford, M5 4WT (United Kingdom)], E-mail: s.bradley@auckland.ac.nz

    2008-05-01

    All commercial LIDARs and SODARs are monostatic and hence sample distributed volumes to construct wind vector components. We use an analytic potential flow model to estimate errors arising for a range of LIDAR and SODAR configurations on hills and escarpments. Wind speed errors peak at a height relevant to wind turbines and can typically be 20%.

  9. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that lead to fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy) Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  10. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation over uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods
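
    The variance approximation by Taylor series expansion mentioned above reduces, for uncorrelated primary error sources, to summing squared sensitivities times source variances. A generic sketch of that step (Python; the measurement model and the numbers are placeholders, not the Fernald data):

        def propagated_variance(sensitivities, variances):
            """First-order Taylor propagation for uncorrelated error sources:
            var(f) ~= sum_i (df/dx_i)^2 * var(x_i)."""
            return sum(s ** 2 * v for s, v in zip(sensitivities, variances))

        # Placeholder model: uranium mass = net weight * uranium concentration
        weight, conc = 1000.0, 0.85        # kg, mass fraction uranium
        sens = [conc, weight]              # d(mass)/d(weight), d(mass)/d(conc)
        var = [0.5 ** 2, 0.002 ** 2]       # weighing and concentration variances
        print(propagated_variance(sens, var))  # variance of the uranium mass estimate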

  11. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  12. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism commit these errors, a libertarianism that seeks in physicalist indeterminism the basis of the possibili...

  13. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  14. SPACE-BORNE LASER ALTIMETER GEOLOCATION ERROR ANALYSIS

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2018-05-01

    Full Text Available This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and, to satisfy the accuracy requirements of laser control points, a design index for each error source is put forward.

  15. First order error corrections in common introductory physics experiments

    Science.gov (United States)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As part of introductory physics courses, students perform various standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and little thought is given to their sources. However, paying attention to the factors that give rise to errors helps students build better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of error in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.

  16. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1982-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR 1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date

  17. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1981-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEP) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determined HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.

  18. Human error in remote Afterloading Brachytherapy

    International Nuclear Information System (INIS)

    Quinn, M.L.; Callan, J.; Schoenfeld, I.; Serig, D.

    1994-01-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error

  19. Error Correcting Codes

    Indian Academy of Sciences (India)

    ... the Reed-Solomon code contained 223 bytes of data (a byte ... then you have a data storage system with error correction, that ... for practical codes, storing such a table is infeasible, as it is generally too large.
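
    The storage idea sketched in these article fragments can be illustrated with a much simpler code than Reed-Solomon; the sketch below uses a (7,4) Hamming code, which corrects any single flipped bit per block (Python; illustrative only, not the Reed-Solomon construction discussed in the series):

        def hamming74_encode(d):
            """Encode 4 data bits as a 7-bit Hamming codeword (positions 1..7)."""
            c = [0] * 8                     # index 0 unused for 1-based positions
            c[3], c[5], c[6], c[7] = d
            c[1] = c[3] ^ c[5] ^ c[7]
            c[2] = c[3] ^ c[6] ^ c[7]
            c[4] = c[5] ^ c[6] ^ c[7]
            return c[1:]

        def hamming74_decode(word):
            """Correct up to one flipped bit and return the 4 data bits."""
            c = [0] + list(word)
            s = ((c[1] ^ c[3] ^ c[5] ^ c[7]) * 1
                 + (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2
                 + (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4)   # syndrome = error position
            if s:
                c[s] ^= 1
            return [c[3], c[5], c[6], c[7]]

        word = hamming74_encode([1, 0, 1, 1])
        word[2] ^= 1                        # corrupt one bit in "storage"
        print(hamming74_decode(word))       # [1, 0, 1, 1] recovered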

  20. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March ... Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  1. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table A′-axis on the workpiece side) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is determined by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process employs detection of the present tool path and analysis of the geometric errors of the RTTTR five-axis CNC machine tool, translation of the current component positions to compensated positions using the kinematic error model, conversion of the newly created components to new tool paths using the compensation algorithms, and finally editing of the old G-codes using a G-code generator algorithm.
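
    The rigid-body kinematics formulation means the tool-center-point deviation can be obtained by chaining nominal 4x4 homogeneous transforms with small error transforms for each axis. A stripped-down sketch of that composition (Python with NumPy; only two axes and made-up error values, nowhere near the 43-component model described):

        import numpy as np

        def translation(x, y, z):
            """Nominal homogeneous translation transform."""
            T = np.eye(4)
            T[:3, 3] = [x, y, z]
            return T

        def small_error(dx, dy, dz, rx, ry, rz):
            """Small-angle error transform: offsets dx, dy, dz and rotations rx, ry, rz."""
            E = np.eye(4)
            E[:3, :3] += np.array([[0, -rz, ry], [rz, 0, -rx], [-ry, rx, 0]])
            E[:3, 3] = [dx, dy, dz]
            return E

        # Nominal chain: X slide then Z slide; each axis followed by its error transform
        nominal = translation(100.0, 0, 0) @ translation(0, 0, 50.0)
        actual = (translation(100.0, 0, 0) @ small_error(5e-3, 0, 0, 0, 1e-5, 0)
                  @ translation(0, 0, 50.0) @ small_error(0, 2e-3, 0, 0, 0, 0))
        tcp_nominal = nominal @ np.array([0, 0, 0, 1.0])
        tcp_actual = actual @ np.array([0, 0, 0, 1.0])
        print(tcp_actual[:3] - tcp_nominal[:3])   # volumetric error at the TCP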

  2. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and...

  3. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke eKruse-Weber

    2014-07-01

    Full Text Available Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error and error management (during and after the error are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  4. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
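    A minimal sketch of the memory idea, under the assumption that the local truncation error can be represented by a first-order autoregressive (AR(1)) process and weighted by goal sensitivities; the correlation, amplitude, and dual weights below are illustrative placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_steps, n_ensemble = 200, 500
phi, sigma = 0.8, 1e-3          # hypothetical correlation and amplitude of the truncation error
# Hypothetical dual weights (sensitivity of the goal to an error committed at each step)
dual_weights = np.exp(-np.linspace(0, 3, n_steps))

# Sample time-correlated truncation errors as an AR(1) process for each ensemble member
eps = np.zeros((n_ensemble, n_steps))
for t in range(1, n_steps):
    eps[:, t] = phi * eps[:, t - 1] + sigma * np.sqrt(1 - phi**2) * rng.standard_normal(n_ensemble)

# Dual-weighted goal-error estimate per ensemble member
goal_error = eps @ dual_weights
print("estimated goal error: mean %.2e, 95%% bound %.2e"
      % (goal_error.mean(), np.quantile(np.abs(goal_error), 0.95)))
```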

  5. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  6. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition and taxonomy of team errors. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  7. Prescribing errors in a Brazilian neonatal intensive care unit

    Directory of Open Access Journals (Sweden)

    Ana Paula Cezar Machado

    2015-12-01

    Full Text Available Pediatric patients, especially those admitted to the neonatal intensive care unit (ICU), are highly vulnerable to medication errors. This study aimed to measure the prescription error rate in a university hospital neonatal ICU and to identify susceptible patients, types of errors, and the medicines involved. The variables related to medicines prescribed were compared to the Neofax prescription protocol. The study enrolled 150 newborns and analyzed 489 prescription order forms, with 1,491 medication items, corresponding to 46 drugs. The prescription error rate was 43.5%. Errors were found in dosage, intervals, diluents, and infusion time, distributed across 7 therapeutic classes. Errors were more frequent in preterm newborns. Diluent and dosing were the most frequent sources of errors. The therapeutic classes most involved in errors were antimicrobial agents and drugs that act on the nervous and cardiovascular systems.

  8. Making Deferred Taxes Relevant

    NARCIS (Netherlands)

    Brouwer, Arjan; Naarding, Ewout

    2018-01-01

    We analyse the conceptual problems in current accounting for deferred taxes and provide solutions derived from the literature in order to make International Financial Reporting Standards (IFRS) deferred tax numbers value-relevant. In our view, the empirical results concerning the value relevance of

  9. Parsimonious relevance models

    NARCIS (Netherlands)

    Meij, E.; Weerkamp, W.; Balog, K.; de Rijke, M.; Myang, S.-H.; Oard, D.W.; Sebastiani, F.; Chua, T.-S.; Leong, M.-K.

    2008-01-01

    We describe a method for applying parsimonious language models to re-estimate the term probabilities assigned by relevance models. We apply our method to six topic sets from test collections in five different genres. Our parsimonious relevance models (i) improve retrieval effectiveness in terms of

  10. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four...... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  11. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  12. Rotational error in path integration: encoding and execution errors in angle reproduction.

    Science.gov (United States)

    Chrastil, Elizabeth R; Warren, William H

    2017-06-01

    Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error-the error in registering the outbound path in memory. Here, we also consider execution error-the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.

  13. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has given new opportunities for remodelling the cornea. The laser energy can be delivered to the stromal surface, as in PRK, or deeper into the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  14. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  15. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...

  16. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  17. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

    Tamara E. Payne, Philip J. Castro, Stephen A. Gregory (Applied Optimization). …advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly… …filter systems will likely be supplanted by the Sloan-based filter systems. The Johnson photometric system is a set of filters in the optical…

  18. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  19. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  20. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Full Text Available Background: The increasing number of reports on medication errors and the associated harms, especially in medical centers, has become a growing patient-safety concern in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of the pharmacy directors. Direct observation was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. Dispensing errors were studied on 1,000 doses dispensed in each hospital pharmacy. Results: A total of 8,162 doses of medications were studied during the four stages, of which 8,000 had complete data for analysis. 73% of prescribing orders were incomplete and did not include all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering comprise most of the medication errors, improvements are needed in all four stages. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  1. Terrestrial neutron-induced soft errors in advanced memory devices

    CERN Document Server

    Nakamura, Takashi; Ibe, Eishi; Yahagi, Yasuo; Kameyama, Hideaki

    2008-01-01

    Terrestrial neutron-induced soft errors in semiconductor memory devices are currently a major concern in reliability issues. Understanding the mechanism and quantifying soft-error rates are primarily crucial for the design and quality assurance of semiconductor memory devices. This book covers the relevant up-to-date topics in terrestrial neutron-induced soft errors, and aims to provide succinct knowledge on neutron-induced soft errors to the readers by presenting several valuable and unique features.

  2. SU-F-T-310: Does a Head-Mounted Ionization Chamber Detect IMRT Errors?

    International Nuclear Information System (INIS)

    Wegener, S; Herzog, B; Sauer, O

    2016-01-01

    Purpose: The conventional plan verification strategy is delivering a plan to a QA-phantom before the first treatment. Monitoring each fraction of the patient treatment in real-time would improve patient safety. We evaluated how well a new detector, the IQM (iRT Systems, Germany), is capable of detecting errors we induced into IMRT plans of three different treatment regions. Results were compared to an established phantom. Methods: Clinical plans of a brain, prostate and head-and-neck patient were modified in the Pinnacle planning system, such that they resulted in either several percent lower prescribed doses to the target volume or several percent higher doses to relevant organs at risk. Unaltered plans were measured on three days, modified plans once, each with the IQM at an Elekta Synergy with an Agility MLC. All plans were also measured with the ArcCHECK with the cavity plug and a PTW semiflex 31010 ionization chamber inserted. Measurements were evaluated with SNC patient software. Results: Repeated IQM measurements of the original plans were reproducible, such that a 1% deviation from the mean as warning and 3% as action level as suggested by the manufacturer seemed reasonable. The IQM detected most of the simulated errors including wrong energy, a faulty leaf, wrong trial exported and a 2 mm shift of one leaf bank. Detection limits were reached for two plans - a 2 mm field position error and a leaf bank offset combined with an MU change. ArcCHECK evaluation according to our current standards also left undetected errors. Ionization chamber evaluation alone would leave most errors undetected. Conclusion: The IQM detected most errors and performed as well as currently established phantoms with the advantage that it can be used throughout the whole treatment. Drawback is that it does not indicate the source of the error.

  3. SU-F-T-310: Does a Head-Mounted Ionization Chamber Detect IMRT Errors?

    Energy Technology Data Exchange (ETDEWEB)

    Wegener, S; Herzog, B; Sauer, O [University of Wuerzburg, Wuerzburg (Germany)

    2016-06-15

    Purpose: The conventional plan verification strategy is delivering a plan to a QA-phantom before the first treatment. Monitoring each fraction of the patient treatment in real-time would improve patient safety. We evaluated how well a new detector, the IQM (iRT Systems, Germany), is capable of detecting errors we induced into IMRT plans of three different treatment regions. Results were compared to an established phantom. Methods: Clinical plans of a brain, prostate and head-and-neck patient were modified in the Pinnacle planning system, such that they resulted in either several percent lower prescribed doses to the target volume or several percent higher doses to relevant organs at risk. Unaltered plans were measured on three days, modified plans once, each with the IQM at an Elekta Synergy with an Agility MLC. All plans were also measured with the ArcCHECK with the cavity plug and a PTW semiflex 31010 ionization chamber inserted. Measurements were evaluated with SNC patient software. Results: Repeated IQM measurements of the original plans were reproducible, such that a 1% deviation from the mean as warning and 3% as action level as suggested by the manufacturer seemed reasonable. The IQM detected most of the simulated errors including wrong energy, a faulty leaf, wrong trial exported and a 2 mm shift of one leaf bank. Detection limits were reached for two plans - a 2 mm field position error and a leaf bank offset combined with an MU change. ArcCHECK evaluation according to our current standards also left undetected errors. Ionization chamber evaluation alone would leave most errors undetected. Conclusion: The IQM detected most errors and performed as well as currently established phantoms with the advantage that it can be used throughout the whole treatment. Drawback is that it does not indicate the source of the error.
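    A minimal sketch of how the reported 1% warning / 3% action levels could be applied to a monitored fraction, comparing a cumulative chamber signal against the mean of repeated baseline deliveries; the signal values and the helper name check_plan_signal are hypothetical.

```python
def check_plan_signal(measured, baseline_signals, warn=0.01, action=0.03):
    """Compare a measured cumulative chamber signal against the mean of baseline deliveries.

    warn/action correspond to the 1% / 3% deviation levels mentioned in the abstract."""
    baseline = sum(baseline_signals) / len(baseline_signals)
    deviation = (measured - baseline) / baseline
    if abs(deviation) >= action:
        status = "ACTION"
    elif abs(deviation) >= warn:
        status = "WARNING"
    else:
        status = "PASS"
    return deviation, status

# Hypothetical signals (arbitrary units): three reference deliveries and one monitored fraction
dev, status = check_plan_signal(1041.0, [1002.3, 1000.1, 998.9])
print(f"deviation = {dev:+.1%} -> {status}")
```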

  4. Refractive error assessment: influence of different optical elements and current limits of biometric techniques.

    Science.gov (United States)

    Ribeiro, Filomena; Castanheira-Dinis, Antonio; Dias, Joao Mendanha

    2013-03-01

    To identify and quantify sources of error on refractive assessment using exact ray tracing. The Liou-Brennan eye model was used as a starting point and its parameters were varied individually within a physiological range. The contribution of each parameter to refractive error was assessed using linear regression curve fits and Gaussian error propagation analysis. A Monte Carlo analysis quantified the limits of refractive assessment given by current biometric measurements. Vitreous and aqueous refractive indices are the elements that influence refractive error the most, with a 1% change of each parameter contributing to a refractive error variation of +1.60 and -1.30 diopters (D), respectively. In the phakic eye, axial length measurements taken by ultrasound (vitreous chamber depth, lens thickness, and anterior chamber depth [ACD]) were the most sensitive to biometric errors, with a contribution to the refractive error of 62.7%, 14.2%, and 10.7%, respectively. In the pseudophakic eye, vitreous chamber depth showed the highest contribution at 53.7%, followed by postoperative ACD at 35.7%. When optic measurements were considered, postoperative ACD was the most important contributor, followed by anterior corneal surface and its asphericity. A Monte Carlo simulation showed that current limits of refractive assessment are 0.26 and 0.28 D for the phakic and pseudophakic eye, respectively. The most relevant optical elements either do not have available measurement instruments or the existing instruments still need to improve their accuracy. Ray tracing can be used as an optical assessment technique, and may be the correct path for future personalized refractive assessment. Copyright 2013, SLACK Incorporated.
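    The sketch below illustrates the two propagation approaches mentioned above, Gaussian error propagation and a Monte Carlo simulation, for a linearized refraction model; the parameter uncertainties and sensitivity coefficients are illustrative assumptions, not the values derived in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-sigma measurement uncertainties and linear sensitivities d(refraction)/d(parameter).
# These numbers are placeholders for illustration, not the coefficients reported in the paper.
params = {
    #                        sigma   sensitivity (D per unit)
    "axial_length_mm":      (0.03,  -2.7),
    "acd_mm":               (0.10,   1.4),
    "corneal_radius_mm":    (0.05,   5.9),
}

n = 100_000
total = np.zeros(n)
for sigma, sens in params.values():
    total += sens * rng.normal(0.0, sigma, n)          # Monte Carlo perturbation of each parameter

print("Monte Carlo refractive uncertainty: %.2f D (1 sigma)" % total.std())
# Gaussian error propagation for comparison: sqrt(sum of (sensitivity * sigma)^2)
analytic = np.sqrt(sum((sens * sigma) ** 2 for sigma, sens in params.values()))
print("Analytic propagation:               %.2f D" % analytic)
```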

  5. Putting into practice error management theory: Unlearning and learning to manage action errors in construction.

    Science.gov (United States)

    Love, Peter E D; Smith, Jim; Teo, Pauline

    2018-05-01

    Error management theory is drawn upon to examine how a project-based organization, which took the form of a program alliance, was able to change its established error prevention mindset to one that enacted a learning mindfulness that provided an avenue to curtail its action errors. The program alliance was required to unlearn its existing routines and beliefs to accommodate the practices required to embrace error management. As a result of establishing an error management culture the program alliance was able to create a collective mindfulness that nurtured learning and supported innovation. The findings provide a much-needed context to demonstrate the relevance of error management theory to effectively address rework and safety problems in construction projects. The robust theoretical underpinning that is grounded in practice and presented in this paper provides a mechanism to engender learning from errors, which can be utilized by construction organizations to improve the productivity and performance of their projects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Estimating the relevance of world disturbances to explain savings, interference and long-term motor adaptation effects.

    Directory of Open Access Journals (Sweden)

    Max Berniker

    2011-10-01

    Full Text Available Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined if these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies on the other hand should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters.

  7. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in malfunctioning of complex, technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20-65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments and applying them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed

  9. AN ANALYSIS OF ACEHNESE EFL STUDENTS’ GRAMMATICAL ERRORS IN WRITING RECOUNT TEXTS

    Directory of Open Access Journals (Sweden)

    Qudwatin Nisak M. Isa

    2017-11-01

    Full Text Available This study aims at finding empirical evidence of the most common types of grammatical errors and sources of errors in recount texts written by the first-year students of SMAS Babul Maghfirah, Aceh Besar. The subject of the study was a collection of students’ personal writing documents of recount texts about their lives experience. The students’ recount texts were analyzed by referring to Betty S. Azar classification and Richard’s theory on sources of errors. The findings showed that the total number of error is 436. Two frequent types of grammatical errors were Verb Tense and Word Choice. The major sources of error were Intralingual Error, Interference Error and Developmental Error respectively. Furthermore, the findings suggest that it is necessary for EFL teachers to apply appropriate techniques and strategies in teaching recount texts, which focus on past tense and language features of the text in order to reduce the possible errors to be made by the students.

  10. Culturally Relevant Cyberbullying Prevention

    OpenAIRE

    Phillips, Gregory John

    2017-01-01

    In this action research study, I, along with a student intervention committee of 14 members, developed a cyberbullying intervention for a large urban high school on the west coast. This high school contained a predominantly African American student population. I aimed to discover culturally relevant cyberbullying prevention strategies for African American students. The intervention committee selected video safety messages featuring African American actors as the most culturally relevant cyber...

  11. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

    Full Text Available Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high-frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved breast disease diagnostics. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement or the time gain curve or range. Errors dependent on the examiner, resulting in the wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimizing the number of errors made have been discussed, including the ones related to the appropriate examination technique, taking into account data from the case history and the use of the greatest possible number of additional options such as: harmonic imaging, color and power Doppler and elastography. In the article examples of errors resulting from the technical conditions of the method have been presented, and those dependent on the examiner which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  12. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  13. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi‐layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE‐like concept is also presented. Examples, tests, evaluation experiments, and comparisons with similar machines using classic approaches complement the descriptions.
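    A minimal sketch of the MEE criterion, assuming a Parzen-window (Gaussian kernel) estimate of the quadratic information potential of the prediction errors; minimizing Rényi's quadratic entropy of the errors is then equivalent to maximizing this potential. The kernel width and sample errors below are arbitrary.

```python
import numpy as np

def information_potential(errors, sigma=0.5):
    """Parzen-window estimate of the quadratic information potential V(e).
    Minimizing Renyi's quadratic entropy H2 = -log V is equivalent to maximizing V."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]
    kernel = np.exp(-diff**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
    return kernel.mean()

def mee_loss(y_true, y_pred, sigma=0.5):
    """Minimum-error-entropy loss: Renyi quadratic entropy of the errors."""
    return -np.log(information_potential(np.asarray(y_true) - np.asarray(y_pred), sigma))

# Tightly clustered errors give lower error entropy than widely dispersed errors
print(mee_loss([0, 0, 0, 0], [0.1, 0.1, 0.1, 0.1]))    # small entropy: errors are concentrated
print(mee_loss([0, 0, 0, 0], [-1.0, 1.0, -1.0, 1.0]))  # larger entropy: errors are dispersed
```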

  14. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservativism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
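    A small sketch of the difference calculation discussed above: when the two sizing errors are partially correlated, the variance of the growth estimate is reduced by the covariance term. The uncertainty values and correlation coefficient are hypothetical.

```python
import math

def growth_uncertainty(sigma1, sigma2, rho):
    """1-sigma uncertainty of a depth difference d = depth2 - depth1
    when the two sizing errors have correlation coefficient rho."""
    return math.sqrt(sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2)

# Hypothetical sizing uncertainties (percent through-wall) for two successive inspections
s1, s2 = 5.0, 5.0
print("uncorrelated errors :", growth_uncertainty(s1, s2, 0.0))   # ~7.1
print("correlated errors   :", growth_uncertainty(s1, s2, 0.8))   # ~3.2, correlated terms partly cancel
```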

  15. The concept of error and malpractice in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca; Pinto, Fabio; Reali, Riccardo; Daniele, Stefania; Romano, Luigia

    2012-08-01

    Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. The etiology of radiological error is multifactorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge, and misjudgments. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Errors are an inevitable part of human life, and every health professional has made mistakes. To improve patient safety and reduce the risk of harm, we must accept that some errors are inevitable during the delivery of health care. We must promote a cultural change in medicine, wherein errors are actively sought, openly discussed, and aggressively addressed. Copyright © 2012 Elsevier Inc. All rights reserved.

  16. Learning from errors in radiology to improve patient safety.

    Science.gov (United States)

    Saeed, Shaista Afzal; Masroor, Imrana; Shafqat, Gulnaz

    2013-10-01

    To determine the views and practices of trainees and consultant radiologists about error reporting. Cross-sectional survey. Radiology trainees and consultant radiologists in four tertiary care hospitals in Karachi were approached in the second quarter of 2011. Participants were asked about their grade, sub-specialty interest, whether they kept a record/log of their errors (defined as a mistake that has management implications for the patient), the number of errors they made in the last 12 months, and the predominant type of error. They were also asked about the details of their department error meetings. All duly completed questionnaires were included in the study, while those with incomplete information were excluded. A total of 100 radiologists participated in the survey. Of them, 34 were consultants and 66 were trainees. They had a wide range of sub-specialty interests, such as CT and ultrasound. Out of the 100 responders, 49 kept a personal record/log of their errors. When asked to recall the approximate number of errors made in the last 12 months, 73 (73%) of the participants gave a varied response, with 1-5 errors mentioned by the majority, i.e. 47 (64.5%). Most of the radiologists (97%) reported receiving information about their errors through multiple sources, such as morbidity/mortality meetings, patients' follow-up, colleagues, and consultants. Perceptual errors (66, 66%) were the predominant error type reported. Regular occurrence of error meetings and attendance at three or more error meetings in the last 12 months were reported by 35% of participants. The majority among these described the atmosphere of these error meetings as informative and comfortable (n = 22, 62.8%). It is of utmost importance to develop a culture of learning from mistakes by conducting error meetings and improving the process of recording and addressing errors to enhance patient safety.

  17. Is the Internet a useful and relevant source for health and health care information retrieval for German cardiothoracic patients? First results from a prospective survey among 255 Patients at a German cardiothoracic surgical clinic

    Directory of Open Access Journals (Sweden)

    Diez Claudius

    2006-10-01

    Full Text Available Background: It is not clear how prevalent Internet use is among cardiopathic patients in Germany and what impact it has on health care utilisation. We measured the extent of Internet use among cardiopathic patients and examined the effects that Internet use has on users' knowledge about their cardiac disease, health care matters and their use of the health care system. Methods: We conducted a prospective survey among 255 cardiopathic patients at a German university hospital. Results: Forty-seven respondents (18%) used the Internet and 8.8% (n = 23) went online more than 20 hours per month. The most frequent reason for not using the Internet was disinterest (52.3%). Fourteen patients (5.4%) searched for specific disease-related information and rated the retrieved information on an analogue scale (1 = not relevant, 5 = very relevant) with a median of 4.0. Internet use is age- and education-dependent. Only 36 (14.1%) respondents found the Internet useful, whereas the vast majority would not use it. Electronic scheduling of ambulatory visits and postoperative telemedical monitoring were rather disapproved of. Conclusion: We conclude that Internet use is infrequent among our study population and the search for relevant health and disease-related information is not well established.

  18. Understanding error generation in fused deposition modeling

    International Nuclear Information System (INIS)

    Bochmann, Lennart; Transchel, Robert; Wegener, Konrad; Bayley, Cindy; Helu, Moneer; Dornfeld, David

    2015-01-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08–0.30 mm) are generally greater than in the x direction (0.12–0.62 mm) and the z direction (0.21–0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology. (paper)

  19. Understanding error generation in fused deposition modeling

    Science.gov (United States)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.
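    A minimal sketch of how per-axis accuracy (systematic offset from nominal) and precision (spread of repeats) could be tabulated from repeated measurements in an error-budget study; the measurement values below are invented for illustration and do not reproduce the paper's data.

```python
import numpy as np

# Hypothetical repeated measurements (mm) of a nominal 20.000 mm feature along each axis
nominal = 20.000
measurements = {
    "x": [20.18, 20.25, 20.31, 20.12, 20.22],
    "y": [20.09, 20.12, 20.15, 20.08, 20.11],
    "z": [20.30, 20.41, 20.25, 20.36, 20.45],
}

for axis, values in measurements.items():
    v = np.asarray(values)
    accuracy = v.mean() - nominal      # systematic deviation from the nominal dimension
    precision = 2 * v.std(ddof=1)      # spread of the repeated measurements
    print(f"{axis}: accuracy = {accuracy:+.3f} mm, precision = {precision:.3f} mm")
```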

  20. Angular discretization errors in transport theory

    International Nuclear Information System (INIS)

    Nelson, P.; Yu, F.

    1992-01-01

    Elements of the information-based complexity theory are computed for several types of information and associated algorithms for angular approximations in the setting of a one-dimensional model problem. For point-evaluation information, the local and global radii of information are computed, a (trivial) optimal algorithm is determined, and the local and global error of a discrete ordinates algorithm are shown to be infinite. For average cone-integral information, the local and global radii of information are computed, and the local and global error is shown to tend to zero as the underlying partition is indefinitely refined. A central algorithm for such information and an optimal partition (of given cardinality) are described. It is further shown that the analytic first-collision source method has zero error (for the purely absorbing model problem). Implications of the restricted problem domains suitable for the various types of information are discussed.

  1. Reduction of weighing errors caused by tritium decay heating

    International Nuclear Information System (INIS)

    Shaw, J.F.

    1978-01-01

    The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 104-cm³ mix vessel. Air cooling the vessel reduces the weighing error by 90%.

  2. Systematic Errors in Dimensional X-ray Computed Tomography

    DEFF Research Database (Denmark)

    that it is possible to compensate them. In dimensional X-ray computed tomography (CT), many physical quantities influence the final result. However, it is important to know which factors in CT measurements potentially lead to systematic errors. In this talk, typical error sources in dimensional X-ray CT are discussed...

  3. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  4. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  5. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails: (i) when bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both of these have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error-forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
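    For context, the sketch below illustrates the basic packet-combining idea that the letter builds on: bits where two erroneous copies disagree are candidate error locations, and flip patterns over those positions are tried until an integrity check passes (a CRC-32 stands in for the error-detection code here). It fails exactly in the cases noted above, i.e. when both copies are corrupted in the same positions.

```python
from itertools import combinations
import zlib

def crc_ok(packet: bytes, expected_crc: int) -> bool:
    return zlib.crc32(packet) == expected_crc

def combine_copies(copy1: bytes, copy2: bytes, expected_crc: int):
    """Packet-combining idea: bits where the two erroneous copies disagree are the
    candidate error locations; search flip patterns over them until the CRC checks."""
    diff_bits = [i for i in range(len(copy1) * 8)
                 if (copy1[i // 8] ^ copy2[i // 8]) >> (7 - i % 8) & 1]
    for r in range(len(diff_bits) + 1):
        for subset in combinations(diff_bits, r):
            candidate = bytearray(copy1)
            for b in subset:
                candidate[b // 8] ^= 1 << (7 - b % 8)
            if crc_ok(bytes(candidate), expected_crc):
                return bytes(candidate)
    return None  # fails when both copies are corrupted in the same positions

original = b"hello world"
crc = zlib.crc32(original)
copy1 = bytearray(original); copy1[1] ^= 0x04   # one bit error in the first copy
copy2 = bytearray(original); copy2[7] ^= 0x20   # a different bit error in the second copy
print(combine_copies(bytes(copy1), bytes(copy2), crc))  # b'hello world'
```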

  6. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  7. The Limits to Relevance

    Science.gov (United States)

    Averill, M.; Briggle, A.

    2006-12-01

    Science policy and knowledge production lately have taken a pragmatic turn. Funding agencies increasingly are requiring scientists to explain the relevance of their work to society. This stems in part from mounting critiques of the "linear model" of knowledge production in which scientists operating according to their own interests or disciplinary standards are presumed to automatically produce knowledge that is of relevance outside of their narrow communities. Many contend that funded scientific research should be linked more directly to societal goals, which implies a shift in the kind of research that will be funded. While both authors support the concept of useful science, we question the exact meaning of "relevance" and the wisdom of allowing it to control research agendas. We hope to contribute to the conversation by thinking more critically about the meaning and limits of the term "relevance" and the trade-offs implicit in a narrow utilitarian approach. The paper will consider which interests tend to be privileged by an emphasis on relevance and address issues such as whose goals ought to be pursued and why, and who gets to decide. We will consider how relevance, narrowly construed, may actually limit the ultimate utility of scientific research. The paper also will reflect on the worthiness of research goals themselves and their relationship to a broader view of what it means to be human and to live in society. Just as there is more to being human than the pragmatic demands of daily life, there is more at issue with knowledge production than finding the most efficient ways to satisfy consumer preferences or fix near-term policy problems. We will conclude by calling for a balanced approach to funding research that addresses society's most pressing needs but also supports innovative research with less immediately apparent application.

  8. Relevant Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2009-01-01

    Subspace clustering aims at detecting clusters in any subspace projection of a high dimensional space. As the number of possible subspace projections is exponential in the number of dimensions, the result is often tremendously large. Recent approaches fail to reduce results to relevant subspace...... clusters. Their results are typically highly redundant, i.e. many clusters are detected multiple times in several projections. In this work, we propose a novel model for relevant subspace clustering (RESCU). We present a global optimization which detects the most interesting non-redundant subspace clusters...... achieves top clustering quality while competing approaches show greatly varying performance....

  9. An upper bound on the number of errors corrected by a convolutional code

    DEFF Research Database (Denmark)

    Justesen, Jørn

    2000-01-01

    The number of errors that a convolutional code can correct in a segment of the encoded sequence is upper bounded by the number of distinct syndrome sequences of the relevant length.

  10. Measurement Error Estimation for Capacitive Voltage Transformer by Insulation Parameters

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2017-03-01

    Full Text Available Measurement errors of a capacitive voltage transformer (CVT) are determined by its equivalent parameters, to which its capacitive divider contributes the most. In daily operation, dielectric aging, moisture, dielectric breakdown, etc., exert mixed effects on the capacitive divider's insulation characteristics, leading to fluctuations in the equivalent parameters that result in measurement error. This paper proposes an equivalent circuit model for a CVT that incorporates the insulation characteristics of the capacitive divider. Through software simulation and laboratory experiments, the relationship between measurement errors and insulation parameters is obtained. It indicates that variation of the insulation parameters of a CVT will cause a considerable measurement error. From field tests and calculation, equivalent capacitance mainly affects the magnitude error, while dielectric loss mainly affects the phase error. As capacitance changes by 0.2%, the magnitude error can reach −0.2%. As the dielectric loss factor changes by 0.2%, the phase error can reach 5′. An increase of equivalent capacitance and dielectric loss factor in the high-voltage capacitor will cause a positive real-power measurement error. An increase of equivalent capacitance and dielectric loss factor in the low-voltage capacitor will cause a negative real-power measurement error.
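    A rough linearization of the reported sensitivities (0.2% capacitance change giving roughly −0.2% magnitude error; 0.2% dielectric-loss-factor change giving roughly 5′ phase error), used here only as illustrative coefficients to convert parameter drift into estimated measurement errors.

```python
def cvt_error_estimate(delta_c_percent, delta_tan_delta_percent):
    """Rough linear estimate of CVT measurement errors from capacitive-divider drift,
    using the figures quoted in the abstract (0.2% deltaC -> -0.2% ratio error,
    0.2% delta tan(delta) -> 5 arcmin phase error) as illustrative linear coefficients."""
    magnitude_error_percent = -1.0 * delta_c_percent
    phase_error_arcmin = (5.0 / 0.2) * delta_tan_delta_percent
    return magnitude_error_percent, phase_error_arcmin

mag, phase = cvt_error_estimate(delta_c_percent=0.1, delta_tan_delta_percent=0.3)
print(f"ratio error ~ {mag:+.2f}%, phase error ~ {phase:.1f}'")
```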

  11. Learning curves, taking instructions, and patient safety: using a theoretical domains framework in an interview study to investigate prescribing errors among trainee doctors

    Directory of Open Access Journals (Sweden)

    Duncan Eilidh M

    2012-09-01

    Full Text Available Abstract Background Prescribing errors are a major source of morbidity and mortality and represent a significant patient safety concern. Evidence suggests that trainee doctors are responsible for most prescribing errors. Understanding the factors that influence prescribing behavior may lead to effective interventions to reduce errors. Existing investigations of prescribing errors have been based on Human Error Theory but not on other relevant behavioral theories. The aim of this study was to apply a broad theory-based approach using the Theoretical Domains Framework (TDF to investigate prescribing in the hospital context among a sample of trainee doctors. Method Semistructured interviews, based on 12 theoretical domains, were conducted with 22 trainee doctors to explore views, opinions, and experiences of prescribing and prescribing errors. Content analysis was conducted, followed by applying relevance criteria and a novel stage of critical appraisal, to identify which theoretical domains could be targeted in interventions to improve prescribing. Results Seven theoretical domains met the criteria of relevance: “social professional role and identity,” “environmental context and resources,” “social influences,” “knowledge,” “skills,” “memory, attention, and decision making,” and “behavioral regulation.” From critical appraisal of the interview data, “beliefs about consequences” and “beliefs about capabilities” were also identified as potentially important domains. Interrelationships between domains were evident. Additionally, the data supported theoretical elaboration of the domain behavioral regulation. Conclusions In this investigation of hospital-based prescribing, participants’ attributions about causes of errors were used to identify domains that could be targeted in interventions to improve prescribing. In a departure from previous TDF practice, critical appraisal was used to identify additional domains

  12. Learning curves, taking instructions, and patient safety: using a theoretical domains framework in an interview study to investigate prescribing errors among trainee doctors.

    Science.gov (United States)

    Duncan, Eilidh M; Francis, Jill J; Johnston, Marie; Davey, Peter; Maxwell, Simon; McKay, Gerard A; McLay, James; Ross, Sarah; Ryan, Cristín; Webb, David J; Bond, Christine

    2012-09-11

    Prescribing errors are a major source of morbidity and mortality and represent a significant patient safety concern. Evidence suggests that trainee doctors are responsible for most prescribing errors. Understanding the factors that influence prescribing behavior may lead to effective interventions to reduce errors. Existing investigations of prescribing errors have been based on Human Error Theory but not on other relevant behavioral theories. The aim of this study was to apply a broad theory-based approach using the Theoretical Domains Framework (TDF) to investigate prescribing in the hospital context among a sample of trainee doctors. Semistructured interviews, based on 12 theoretical domains, were conducted with 22 trainee doctors to explore views, opinions, and experiences of prescribing and prescribing errors. Content analysis was conducted, followed by applying relevance criteria and a novel stage of critical appraisal, to identify which theoretical domains could be targeted in interventions to improve prescribing. Seven theoretical domains met the criteria of relevance: "social professional role and identity," "environmental context and resources," "social influences," "knowledge," "skills," "memory, attention, and decision making," and "behavioral regulation." From critical appraisal of the interview data, "beliefs about consequences" and "beliefs about capabilities" were also identified as potentially important domains. Interrelationships between domains were evident. Additionally, the data supported theoretical elaboration of the domain behavioral regulation. In this investigation of hospital-based prescribing, participants' attributions about causes of errors were used to identify domains that could be targeted in interventions to improve prescribing. In a departure from previous TDF practice, critical appraisal was used to identify additional domains that should also be targeted, despite participants' perceptions that they were not relevant to

  13. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  14. Engagement in Learning after Errors at Work: Enabling Conditions and Types of Engagement

    Science.gov (United States)

    Bauer, Johannes; Mulder, Regina H.

    2013-01-01

    This article addresses two research questions concerning nurses' engagement in social learning activities after errors at work. Firstly, we investigated how this engagement relates to nurses' interpretations of the error situation and perceptions of a safe team climate. The results indicate that the individual estimation of an error as relevant to…

  15. Is Information Still Relevant?

    Science.gov (United States)

    Ma, Lia

    2013-01-01

    Introduction: The term "information" in information science does not share the characteristics of those of a nomenclature: it does not bear a generally accepted definition and it does not serve as the bases and assumptions for research studies. As the data deluge has arrived, is the concept of information still relevant for information…

  16. Error analysis of satellite attitude determination using a vision-based approach

    Science.gov (United States)

    Carozza, Ludovico; Bevilacqua, Alessandro

    2013-09-01

    Improvements in communication and processing technologies have opened the doors to exploit on-board cameras to compute objects' spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through purpose-built simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for related application domains that require an accurate estimation of three-dimensional orientation parameters (e.g., robotics, airborne stabilization).

  17. Error Mitigation for Short-Depth Quantum Circuits

    Science.gov (United States)

    Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.

    2017-11-01

    Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which the errors in the computation are introduced. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and do not require additional qubit resources, so as to be as practically relevant in current experiments as possible. The first method, extrapolation to the zero noise limit, subsequently cancels powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
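
    The extrapolation step of the first scheme can be illustrated with a few lines of code (a minimal sketch of the fitting step only; how the noise is rescaled in hardware, the scale factors and the expectation values below are all assumptions, not the authors' data).

        import numpy as np

        def extrapolate_to_zero_noise(scale_factors, expectation_values, order=2):
            """Richardson-style zero-noise extrapolation.

            Fit a polynomial in the noise-scale factor to the measured expectation
            values and evaluate the fit at scale 0. The scale factors (obtained, e.g.,
            by stretching gate durations) and the polynomial order are user choices.
            """
            coeffs = np.polyfit(scale_factors, expectation_values, deg=order)
            return np.polyval(coeffs, 0.0)

        # Illustrative numbers only: noisy estimates of an observable at noise scales 1, 2, 3.
        scales = np.array([1.0, 2.0, 3.0])
        noisy_expectations = np.array([0.78, 0.61, 0.47])

        print(extrapolate_to_zero_noise(scales, noisy_expectations))  # extrapolated estimate at zero noise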

  18. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Uncorrected unidosis carts showed 0.9% medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors arise when the unidosis carts are set up; the rest are due to lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes not having been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: The results point to the need to re-check unidosis carts and to introduce a computerized prescription system to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are looked over before being sent to the hospitalization units, the error rate diminishes to 0.3%.

  19. Basic considerations in predicting error probabilities in human task performance

    International Nuclear Information System (INIS)

    Fleishman, E.A.; Buffardi, L.C.; Allen, J.A.; Gaskins, R.C. III

    1990-04-01

    It is well established that human error plays a major role in the malfunctioning of complex systems. This report takes a broad look at the study of human error and addresses the conceptual, methodological, and measurement issues involved in defining and describing errors in complex systems. In addition, a review of existing sources of human reliability data and approaches to human performance data base development is presented. Alternative task taxonomies, which are promising for establishing the comparability of nuclear and non-nuclear tasks, are also identified. Based on such taxonomic schemes, various data base prototypes for generalizing human error rates across settings are proposed. 60 refs., 3 figs., 7 tabs

  20. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi process in 2008. The Delphi panel consisted of 25 interdisciplinary......

  1. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  2. COMPARATIVE ERROR ANALYSIS IN ENGLISH WRITING BY FIRST, SECOND, AND THIRD YEAR STUDENTS OF ENGLISH DEPARTMENT OF FACULTY OF EDUCATION AT CHAMPASACK UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Nokthavivanh Sychandone

    2016-08-01

    first year learners produced 229 errors (40.10%) and third year learners made 79 errors (13.83%). There are similarities in error types, with five similar categories and five similar error cases, but there are three differing error categories and eighteen differing error cases. The main source of errors was the learners' lack of knowledge of English grammatical rules. Overgeneralization (265 errors, or 46.40%) influences learners' errors, language transfer (199 errors, or 34.85%) still interferes with learners' acquisition, and simplification (107 errors, or 18.73%) is another factor affecting learners' errors.

  3. Review of U.S. Army Unmanned Aerial Systems Accident Reports: Analysis of Human Error Contributions

    Science.gov (United States)

    2018-03-20

    within report documents. The information presented was obtained through a request to use the U.S. Army Combat Readiness Center's Risk Management ... controlled flight into terrain (13 accidents), fueling errors by improper techniques (7 accidents), and a variety of maintenance errors (10 accidents). The ... and 9 of the 10 maintenance accidents. (Table 4 of the report lists frequencies based on source of human error.)

  4. Evaluation and Error Analysis for a Solar Thermal Receiver

    International Nuclear Information System (INIS)

    Pfander, M.

    2001-01-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module as a function of the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs

  5. Radon measurements-discussion of error estimates for selected methods

    International Nuclear Information System (INIS)

    Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav

    2010-01-01

    The main sources of uncertainty for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of the reference equipment; random Poisson and non-Poisson errors during calibration; and random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration differ between kinds of instrumental measurements. The main sources of uncertainty for retrospective measurements conducted by surface-trap techniques can be divided into two groups: errors of surface 210Pb (210Po) activity measurements, and uncertainties in the transfer from 210Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface-trap retrospective technique can be decreased to 35%.

  6. Evaluation and Error Analysis for a Solar thermal Receiver

    Energy Technology Data Exchange (ETDEWEB)

    Pfander, M.

    2001-07-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module as a function of the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs.

  7. Clinical Relevance of Adipokines

    Directory of Open Access Journals (Sweden)

    Matthias Blüher

    2012-10-01

    Full Text Available The incidence of obesity has increased dramatically during recent decades. Obesity increases the risk for metabolic and cardiovascular diseases and may therefore contribute to premature death. With increasing fat mass, secretion of adipose tissue-derived bioactive molecules (adipokines) changes towards a pro-inflammatory, diabetogenic and atherogenic pattern. Adipokines are involved in the regulation of appetite and satiety, energy expenditure, activity, endothelial function, hemostasis, blood pressure, insulin sensitivity, energy metabolism in insulin sensitive tissues, adipogenesis, fat distribution and insulin secretion in pancreatic β-cells. Therefore, adipokines are clinically relevant as biomarkers for fat distribution, adipose tissue function, liver fat content, insulin sensitivity, chronic inflammation and have the potential for future pharmacological treatment strategies for obesity and its related diseases. This review focuses on the clinical relevance of selected adipokines as markers or predictors of obesity related diseases and as potential therapeutic tools or targets in metabolic and cardiovascular diseases.

  8. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  9. Information Needs/Relevance

    OpenAIRE

    Wildemuth, Barbara M.

    2009-01-01

    A user's interaction with a DL is often initiated as the result of the user experiencing an information need of some kind. Aspects of that experience and how it might affect the user's interactions with the DL are discussed in this module. In addition, users continuously make decisions about and evaluations of the materials retrieved from a DL, relative to their information needs. Relevance judgments, and their relationship to the user's information needs, are discussed in this module. Draft

  10. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  11. Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.

    Science.gov (United States)

    Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian

    2016-04-01

    While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.

  12. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phase and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 encountered errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports had been submitted to the patients. On the other hand, test errors in reports that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were concordant with those published from the USA and other countries. This shows that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  13. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance
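
    A rough sketch of the underlying idea - scaling the theoretical weighted-least-squares covariance by the average weighted residual variance so that unmodeled error sources show up in the covariance - is given below. This is a generic formulation for illustration, not necessarily the author's exact construction, and all symbols and numbers are assumed.

        import numpy as np

        def wls_with_empirical_covariance(H, W, y):
            """Weighted least squares with a residual-scaled ('empirical') covariance.

            H : (m, n) matrix of measurement partials
            W : (m, m) measurement weight matrix (typically the inverse measurement covariance)
            y : (m,)  observed-minus-computed measurements

            Returns the state correction, the theoretical covariance (H^T W H)^-1,
            and that covariance scaled by the average weighted residual variance.
            """
            p_theory = np.linalg.inv(H.T @ W @ H)
            x_hat = p_theory @ (H.T @ W @ y)

            residuals = y - H @ x_hat
            m, n = H.shape
            scale = (residuals @ W @ residuals) / (m - n)  # average weighted residual variance
            return x_hat, p_theory, scale * p_theory

        # Tiny illustrative problem: fit a line to data whose real scatter is larger than
        # the assumed measurement sigma, so the residual-scaled covariance correctly inflates.
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 20)
        H = np.column_stack([np.ones_like(t), t])
        y = 1.0 + 2.0 * t + rng.normal(0.0, 0.05, t.size)  # actual noise sigma = 0.05
        W = np.eye(t.size) / 0.01**2                        # assumed sigma = 0.01 (too optimistic)

        x_hat, p_theory, p_empirical = wls_with_empirical_covariance(H, W, y)
        print(np.sqrt(np.diag(p_theory)))     # optimistic formal uncertainties
        print(np.sqrt(np.diag(p_empirical)))  # residual-scaled uncertainties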

  14. Error threshold ghosts in a simple hypercycle with error prone self-replication

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2008-01-01

    A delayed transition caused by mutation processes is shown to occur in a simple hypercycle composed of two indistinguishable molecular species with error-prone self-replication. The appearance of a ghost near the hypercycle error threshold causes a delay in the extinction, and thus in the loss of information, of the mutually catalytic replicators, acting as a kind of information memory. The extinction time, τ, scales near the bifurcation threshold according to the universal square-root scaling law τ ∼ (Q_hc - Q)^(-1/2), typical of dynamical systems close to a saddle-node bifurcation. Here Q_hc represents the bifurcation point, named the hypercycle error threshold, involved in the change between the asymptotic stability phase and the so-called Random Replication State (RRS) of the hypercycle, and the parameter Q is the replication quality factor. The ghost involves a longer transient towards extinction once the saddle-node bifurcation has occurred, the transient becoming extremely long near the bifurcation threshold. The role of this dynamical effect is expected to be relevant in fluctuating environments. Such a phenomenon should also be found in larger hypercycles when considering the hypercycle species in competition with their error tail. The implications of the ghost for the survival and evolution of error-prone self-replicating molecules with hypercyclic organization are discussed
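
    The square-root scaling of the ghost-induced delay can be reproduced with a toy saddle-node normal form (a hypothetical one-dimensional model, not the hypercycle equations; here mu plays the role of the distance Q_hc - Q from the threshold).

        import math

        def ghost_passage_time(mu, x0=-5.0, x_end=5.0, dt=1e-3):
            """Time to cross the saddle-node 'ghost' in dx/dt = mu + x**2, with mu > 0.

            Just past the bifurcation the trajectory lingers near x = 0, and the
            passage time scales like mu**(-1/2) (analytically it approaches pi/sqrt(mu)).
            """
            x, t = x0, 0.0
            while x < x_end:
                x += (mu + x * x) * dt  # forward Euler step
                t += dt
            return t

        for mu in (1e-1, 1e-2, 1e-3):
            print(mu, ghost_passage_time(mu), math.pi / math.sqrt(mu))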

  15. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.

  16. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
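
    A toy model in the spirit of this comparison is sketched below (the linear response, the sensitivities, the sample sizes and the normal model are assumptions made for illustration, not the paper's setup): the observable is a sample mean whose expectation is shifted by each systematic parameter, the unisim estimate adds the one-at-a-time shifts in quadrature, and the multisim estimate is the spread of many runs with all parameters varied at once.

        import numpy as np

        rng = np.random.default_rng(42)

        N_EVENTS = 10_000                            # MC sample size per run (sets the statistical error)
        SENSITIVITIES = np.array([0.3, 0.2, 0.1])    # effect of each systematic (at 1 sigma) on the observable
        TRUE_SYST_SD = np.sqrt(np.sum(SENSITIVITIES**2))

        def mc_run(shifts):
            """One MC 'run': a sample mean whose expectation is moved by the systematic
            shifts, expressed in units of their standard deviations."""
            mean = 1.0 + SENSITIVITIES @ shifts
            return rng.normal(mean, 1.0, N_EVENTS).mean()

        def unisim_estimate():
            nominal = mc_run(np.zeros(3))
            deltas = [mc_run(np.eye(3)[k]) - nominal for k in range(3)]
            return np.sqrt(np.sum(np.square(deltas)))

        def multisim_estimate(n_runs=100):
            results = [mc_run(rng.normal(0.0, 1.0, 3)) for _ in range(n_runs)]
            return np.std(results, ddof=1)

        print("true systematic sd:", TRUE_SYST_SD)
        print("unisim estimate   :", unisim_estimate())
        print("multisim estimate :", multisim_estimate())

    With the statistical error per run (about 0.01 here) much smaller than each individual systematic, the unisim estimate is the steadier of the two, consistent with the regime described above; enlarging the statistical error reverses the comparison.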

  17. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  18. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  19. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  20. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  1. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  2. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in the decision-making process. The study was focused on the research question: what human errors can act as potential sources of decision failure when the alternatives are evaluated during decision making? Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. The analysis of human errors was then linked with mental models in the alternative-evaluation step. The results o...

  3. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure for finding the correction factors with COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors

  4. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  5. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  6. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll)
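
    In thin-lens terms the distinction between these two groups of errors can be written down directly (standard first-order relations quoted for illustration; the symbols are generic and not taken from the study). A quadrupole of focal length $f$ displaced transversely by $\delta$ kicks the beam centroid like a dipole,

        \Delta x' = \frac{\delta}{f},

    which is why displacements and tilts steer the beam off axis, whereas a relative gradient error changes the strength of the lens itself,

        \Delta\left(\frac{1}{f}\right) = \frac{1}{f}\,\frac{\Delta G}{G},

    perturbing the focusing lattice and mismatching the beam envelope rather than moving its centroid; a roll by an angle $\theta$ likewise introduces a skew component (proportional to $\sin 2\theta$) that couples the two transverse planes.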

  7. A general approach to error propagation

    International Nuclear Information System (INIS)

    Sanborn, J.B.

    1987-01-01

    A computational approach to error propagation is explained. It is shown that the application of the first-order Taylor theory to a fairly general expression representing an inventory or inventory-difference quantity leads naturally to a data structure that is useful for structuring error-propagation calculations. This data structure incorporates six types of data entities: (1) the objects in the material balance, (2) numerical parameters that describe these objects, (3) groups or sets of objects, (4) the terms which make up the material-balance equation, (5) the errors or sources of variance and (6) the functions or subroutines that represent Taylor partial derivatives. A simple algorithm based on this data structure can be defined using formulas that are sums of squares of sums. The data structures and algorithms described above have been implemented as computer software in FORTRAN for IBM PC-type machines. A free-form data-entry format allows users to separate data as they wish into separate files and enter data using a text editor. The program has been applied to the computation of limits of error for inventory differences (LEIDs) within the DOE complex. 1 ref., 3 figs
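
    A minimal sketch of the first-order propagation rule that such a program implements is given below (independent error sources only; the inventory-difference example and its numbers are hypothetical, and the program described above also handles groups of objects and correlated terms).

        import numpy as np

        def propagate_variance(gradients, variances):
            """First-order (Taylor) error propagation for independent error sources.

            gradients : partial derivatives of the balance quantity w.r.t. each input
            variances : variances of the corresponding inputs
            Returns sigma_f**2 = sum_i (df/dx_i)**2 * sigma_i**2.
            """
            g = np.asarray(gradients, dtype=float)
            v = np.asarray(variances, dtype=float)
            return float(np.sum(g * g * v))

        # Hypothetical inventory difference ID = beginning + receipts - shipments - ending.
        # Each partial derivative is +1 or -1, so the output variance reduces to the sum of
        # the input variances; the "sums of squares of sums" structure generalizes this to
        # grouped terms.
        gradients = [+1.0, +1.0, -1.0, -1.0]
        variances = [0.4**2, 0.3**2, 0.3**2, 0.5**2]

        sigma_id = propagate_variance(gradients, variances) ** 0.5
        print(f"limit of error (2 sigma): {2 * sigma_id:.2f}")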

  8. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  9. [Relevant public health enteropathogens].

    Science.gov (United States)

    Riveros, Maribel; Ochoa, Theresa J

    2015-01-01

    Diarrhea remains the third leading cause of death in children under five years, despite recent advances in the management and prevention of this disease. It is caused by multiple pathogens, however, the prevalence of each varies by age group, geographical area and the scenario where cases (community vs hospital) are recorded. The most relevant pathogens in public health are those associated with the highest burden of disease, severity, complications and mortality. In our country, norovirus, Campylobacter and diarrheagenic E. coli are the most prevalent pathogens at the community level in children. In this paper we review the local epidemiology and potential areas of development in five selected pathogens: rotavirus, norovirus, Shiga toxin-producing E. coli (STEC), Shigella and Salmonella. Of these, rotavirus is the most important in the pediatric population and the main agent responsible for child mortality from diarrhea. The introduction of rotavirus vaccination in Peru will have a significant impact on disease burden and mortality from diarrhea. However, surveillance studies are needed to determine the impact of vaccination and changes in the epidemiology of diarrhea in Peru following the introduction of new vaccines, as well as antibiotic resistance surveillance of clinical relevant bacteria.

  10. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  11. A national physician survey of diagnostic error in paediatrics.

    Science.gov (United States)

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54 % (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0 %) respondents. Consultants reported significantly fewer diagnostic errors compared to trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15 %. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  12. Quantifying the Contributions of Environmental Parameters to Ceres Surface Net Radiation Error in China

    Science.gov (United States)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July 2007 to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, owing to the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has a weak influence on the Rn error even with the high sensitivity of the algorithm. In order to improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.

  13. Estimation of error fields from ferromagnetic parts in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Oliva, A. Bonito [Fusion for Energy (Spain); Chiariello, A.G.; Formisano, A.; Martone, R. [Ass. EURATOM/ENEA/CREATE, Dip. di Ing. Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, I-81031 Napoli (Italy); Portone, A., E-mail: alfredo.portone@f4e.europa.eu [Fusion for Energy (Spain); Testoni, P. [Fusion for Energy (Spain)

    2013-10-15

    Highlights: ► The paper deals with error fields generated in ITER by magnetic masses. ► The magnetization state is computed from simplified FEM models. ► Closed-form expressions adopted for the flux density of magnetized parts are given. ► Such expressions simplify the estimation of the effect of iron pieces (or their absence) on the error field. -- Abstract: Error fields in tokamaks are small departures from the exact axisymmetry of the ideal magnetic field configuration. Their reduction below a threshold value by the error field correction coils is essential, since sufficiently large static error fields lead to discharge disruption. The error fields originate not only from magnet fabrication and installation tolerances, from the joints and from the busbars, but also from the presence of ferromagnetic elements. It has been shown that superconducting joints, feeders and busbars play a secondary role; however, in order to estimate the importance of each possible error field source, rough evaluations can be very useful because they provide an order of magnitude of the corresponding effect and, therefore, a ranking in the request for in-depth analysis. The paper proposes a two-step procedure. The first step aims to obtain the approximate magnetization state of the ferromagnetic parts; the second aims to estimate the full 3D error field over the whole volume using equivalent sources for the magnetic masses and taking advantage of well-assessed approximate closed-form expressions, well suited for far-distance effects.

  14. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and the associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an educational level of MSc (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors, which may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  15. Errors in the administration of intravenous medication in Brazilian hospitals.

    Science.gov (United States)

    Anselmi, Maria Luiza; Peduzzi, Marina; Dos Santos, Claudia Benedita

    2007-10-01

    To verify the frequency of errors in the preparation and administration of intravenous medication in three Brazilian hospitals in the State of Bahia. The administration of intravenous medications constitutes a central activity in Brazilian nursing. Errors in performing this activity may result in irreparable damage to patients and may compromise the quality of care. Cross-sectional study, conducted in three hospitals in the State of Bahia, Brazil. Direct observation of the nursing staff (nurse technicians, auxiliary nurses and nurse attendants), preparing and administering intravenous medication. When preparing medication, wrong patient error did not occur in any of the three hospitals, whereas omission dose was the most frequent error in all study sites. When administering medication, the most frequent errors in the three hospitals were wrong dose and omission dose. The rates of error found are considered low compared with similar studies. The most frequent types of errors were wrong dose and omission dose. The hospitals studied showed different results with the smallest rates of errors occurring in hospital 1 that presented the best working conditions. Relevance to clinical practice. Studies such as this one have the potential to improve the quality of care.

  16. Operator error and emotions. Operator error and emotions - a major cause of human failure

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, B.K. [Human Factors Practical Incorporated (Canada); Bradley, M. [Univ. of New Brunswick, Saint John, New Brunswick (Canada); Artiss, W.G. [Human Factors Practical (Canada)

    2000-07-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually rooted in our human emotions. Basic psychological theory of emotions is briefly presented, and the authors then describe situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry to review incident source documents for cases of emotional involvement and to develop means to collect emotion-related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know their own emotions, manage those emotions, motivate themselves, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error rooted in emotions and enable a cooperative, productive environment in which to work. (author)

  17. Operator error and emotions. Operator error and emotions - a major cause of human failure

    International Nuclear Information System (INIS)

    Patterson, B.K.; Bradley, M.; Artiss, W.G.

    2000-01-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually rooted in our human emotions. Basic psychological theory of emotions is briefly presented, and the authors then describe situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry to review incident source documents for cases of emotional involvement and to develop means to collect emotion-related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know their own emotions, manage those emotions, motivate themselves, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error rooted in emotions and enable a cooperative, productive environment in which to work. (author)

  18. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). Diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of the AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct the AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct the AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
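
    The attenuation that motivates such corrections is easy to reproduce in a short simulation. The sketch below is not the authors' distribution-free method; it simply shows how classical measurement error pulls the empirical AUC toward 0.5 and how the standard normal-theory deattenuation (which does assume normality) approximately recovers the true value. The sample sizes, group means and error SD are assumptions.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)

        def auc(cases, controls):
            """Empirical AUC = P(case score > control score), ties counted as 1/2."""
            diff = cases[:, None] - controls[None, :]
            return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

        n = 2000
        x_controls = rng.normal(0.0, 1.0, n)                   # true biomarker, controls
        x_cases    = rng.normal(1.0, 1.0, n)                   # true biomarker, cases
        sigma_e = 0.8                                          # assumed measurement-error SD
        w_controls = x_controls + rng.normal(0, sigma_e, n)    # observed with error
        w_cases    = x_cases    + rng.normal(0, sigma_e, n)

        auc_true = auc(x_cases, x_controls)
        auc_obs  = auc(w_cases, w_controls)

        # Normal-theory deattenuation: AUC_corr = Phi(Phi^-1(AUC_obs) / sqrt(reliability))
        reliability = 1.0 / (1.0 + sigma_e**2)                 # var(X) / var(W) when var(X) = 1
        auc_corr = norm.cdf(norm.ppf(auc_obs) / np.sqrt(reliability))
        print(f"true {auc_true:.3f}  observed {auc_obs:.3f}  corrected {auc_corr:.3f}")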

  19. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors, such as cognitive biases and affective influences, can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  20. Other relevant biological papers

    International Nuclear Information System (INIS)

    Shimizu, M.

    1989-01-01

    A considerable number of CRESP-relevant papers concerning deep-sea biology and radioecology have been published. It is the purpose of this study to call attention to them. They fall into three general categories. The first is papers of general interest. They are mentioned only briefly, and include text references to the global bibliography at the end of the volume. The second are papers that are not only mentioned and referenced, but for various reasons are described in abstract form. The last is a list of papers compiled by H.S.J. Roe specifically for this volume. They are listed in bibliographic form, and are also included in the global bibliography at the end of the volume

  1. Detecting Soft Errors in Stencil based Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, V. [Univ. of Utah, Salt Lake City, UT (United States); Gopalkrishnan, G. [Univ. of Utah, Salt Lake City, UT (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
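
    A toy version of the general idea (not the SORREL library itself): fit a linear regression that predicts each updated grid point from its neighbourhood on fault-free runs, then flag points whose residual exceeds a threshold as suspected soft errors. The 1-D heat stencil, the threshold and the injected corruption below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def jacobi_step(u):
            """One 1-D Jacobi/heat stencil update on the interior points."""
            v = u.copy()
            v[1:-1] = 0.5 * (u[:-2] + u[2:])
            return v

        # Training: collect (neighborhood -> updated value) pairs from clean runs
        u = rng.random(256)
        X, y = [], []
        for _ in range(50):
            v = jacobi_step(u)
            X.append(np.stack([u[:-2], u[1:-1], u[2:]], axis=1))
            y.append(v[1:-1])
            u = v
        X, y = np.concatenate(X), np.concatenate(y)
        coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)

        # Detection: predict each update and flag large residuals as suspected soft errors
        def detect(u_before, u_after, threshold=1e-3):
            feats = np.column_stack([u_before[:-2], u_before[1:-1], u_before[2:],
                                     np.ones(len(u_before) - 2)])
            residual = np.abs(feats @ coef - u_after[1:-1])
            return np.nonzero(residual > threshold)[0] + 1   # indices of suspicious grid points

        v = jacobi_step(u)
        v[100] += 0.05          # inject a bit-flip-like corruption
        print(detect(u, v))     # should report index 100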

  2. Recognition of medical errors' reporting system dimensions in educational hospitals.

    Science.gov (United States)

    Yarmohammadian, Mohammad H; Mohammadinia, Leila; Tavakoli, Nahid; Ghalriz, Parvin; Haghshenas, Abbas

    2014-01-01

    Nowadays medical errors are one of the serious issues in the health-care system and constitute a threat to patient safety. The most important step toward promoting safety is identifying errors and their causes in order to recognize, correct and eliminate them. Concern about recurring medical errors and the harms arising from them has led to the design and establishment of medical error reporting systems for hospitals and centres providing therapeutic services. The aim of this study is the recognition of the dimensions of medical error reporting systems in educational hospitals. This research is a descriptive-analytical, qualitative study carried out in the Shahid Beheshti educational therapeutic centre in Isfahan during 2012. Relevant information was collected through 15 face-to-face interviews, each lasting about one hour, and five focus group discussions of about 45 minutes each; participants included the matron, the educational supervisor, the health officer, health education staff, and all of the head nurses. Data from the interviews and discussion sessions were coded; the results were then extracted in the presence of experts and, after incorporating their feedback, were categorized. To verify the correctness of the information, the tables were presented to the interview participants and the final corrections were confirmed based on their views. The information extracted from the interviews and discussion groups was divided into nine main categories after content analysis and subject coding, and their subsets were fully described. The identified dimensions comprise nine domains: the concept of medical error, error cases from the nurses' perspective, barriers to medical error reporting, employees' motivational factors for error reporting, purposes of a medical error reporting system, challenges and opportunities of error reporting, a desired system

  3. Mismeasurement and the resonance of strong confounders: correlated errors.

    Science.gov (United States)

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
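
    The effect described here is straightforward to demonstrate with simulated data. In the hedged sketch below, a strong confounder C truly drives the outcome while the exposure E has no effect; comparing fits with independent versus correlated measurement errors shows how error correlation can diminish or even reverse the bias in the inconsequential variable. All variances and the error correlation are assumed values, not quantities from the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 200_000

        # True structure: strong risk factor C drives Y; exposure E is correlated with C but has no effect
        C = rng.normal(size=n)
        E = 0.5 * C + rng.normal(scale=np.sqrt(0.75), size=n)
        Y = 1.0 * C + rng.normal(size=n)

        def ols_slopes(c_obs, e_obs):
            X = np.column_stack([np.ones(n), c_obs, e_obs])
            return np.linalg.lstsq(X, Y, rcond=None)[0][1:]

        def add_errors(rho_err, sd_err=0.7):
            cov = sd_err**2 * np.array([[1.0, rho_err], [rho_err, 1.0]])
            d = rng.multivariate_normal([0.0, 0.0], cov, size=n)
            return C + d[:, 0], E + d[:, 1]

        print("true variables     :", ols_slopes(C, E))              # ~ (1.0, 0.0)
        print("independent errors :", ols_slopes(*add_errors(0.0)))  # residual confounding biases E upward
        print("correlated errors  :", ols_slopes(*add_errors(0.6)))  # correlation can shrink or reverse that bias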

  4. Estimating the Autocorrelated Error Model with Trended Data: Further Results,

    Science.gov (United States)

    1979-11-01

    Perhaps the most serious deficiency of OLS in the presence of autocorrelation is not inefficiency but bias in its estimated standard errors--a bias... k for all t has variance var(b) = σ²/(Tk²). This refutes Maeshiro's (1976) conjecture that "an estimator utilizing relevant extraneous information...

  5. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

    Science.gov (United States)

    Sarcevic, Aleksandra

    2009-01-01

    An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

  6. Clinical relevance of pharmacist intervention in an emergency department.

    Science.gov (United States)

    Pérez-Moreno, Maria Antonia; Rodríguez-Camacho, Juan Manuel; Calderón-Hernanz, Beatriz; Comas-Díaz, Bernardino; Tarradas-Torras, Jordi

    2017-08-01

    To evaluate the clinical relevance of pharmacist intervention in emergency patient care and to determine the severity of the errors detected. Second, to analyse the most frequent types of interventions and the types of drugs involved, and to evaluate the clinical pharmacist's activity. A 6-month observational prospective study of pharmacist intervention in the Emergency Department (ED) of a 400-bed hospital in Spain was performed to record the interventions carried out by the clinical pharmacists. We determined whether the intervention occurred during medication reconciliation or another activity, and whether the drug involved belonged to the Institute for Safe Medication Practices (ISMP) list of high-alert medications. To evaluate the severity of the errors detected and the clinical relevance of the pharmacist interventions, a modified version of the assessment scale of Overhage and Lukes was used. The relationship between the clinical relevance of pharmacist intervention and the severity of medication errors was assessed using ORs and Spearman's correlation coefficient. During the observation period, pharmacists reviewed the pharmacotherapy history and medication orders of 2984 patients. A total of 991 interventions were recorded in 557 patients; 67.2% of the errors were detected during medication reconciliation. Medication errors were considered severe in 57.2% of cases and 64.9% of pharmacist interventions were considered relevant. About 10.9% of the drugs involved were on the ISMP high-alert medications list. The severity of the medication error and the clinical significance of the pharmacist intervention were correlated (Spearman's ρ=0.728). The clinical pharmacists identified and intervened on a high number of severe medication errors. This suggests that emergency services will benefit from pharmacist-provided drug therapy services. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  7. Content Validity of a Tool Measuring Medication Errors.

    Science.gov (United States)

    Tabassum, Nishat; Allana, Saleema; Saeed, Tanveer; Dias, Jacqueline Maria

    2015-08-01

    The objective of this study was to determine the content and face validity of a tool measuring medication errors among nursing students in baccalaureate nursing education. Data were collected from the Aga Khan University School of Nursing and Midwifery (AKUSoNaM), Karachi, from March to August 2014. The tool was developed utilizing the literature and the expertise of the team members, who were experts in different areas. The developed tool was then sent to five experts from all over Karachi to ensure its content validity, which was assessed in terms of the relevance and clarity of the questions. The Scale Content Validity Index (S-CVI) for clarity and relevance of the questions was found to be 0.94 and 0.98, respectively. The tool measuring medication errors has excellent content validity. This tool should be used for future studies on medication errors, with different study populations such as medical students, doctors, and nurses.
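
    For readers unfamiliar with the index, the sketch below shows one common way a scale-level content validity index (S-CVI/Ave) is computed from expert relevance ratings: the item-level CVI is the share of experts rating an item 3 or 4 on a 4-point relevance scale, and the scale-level value is the average of the item-level values. The ratings matrix is hypothetical and the calculation is not taken from the paper.

        import numpy as np

        # Hypothetical ratings: 5 experts x 6 items, each rated 1-4 for relevance
        ratings = np.array([
            [4, 4, 3, 4, 2, 4],
            [4, 3, 4, 4, 3, 4],
            [3, 4, 4, 3, 4, 4],
            [4, 4, 4, 4, 3, 3],
            [4, 4, 3, 4, 4, 4],
        ])

        # Item-level CVI: proportion of experts rating the item 3 or 4; S-CVI/Ave: their mean
        i_cvi = (ratings >= 3).mean(axis=0)
        s_cvi_ave = i_cvi.mean()
        print("I-CVI per item:", i_cvi)
        print(f"S-CVI/Ave: {s_cvi_ave:.2f}")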

  8. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determining an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, the uniform prior density distribution employed provides no information at all, reflecting a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests on non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated data sets and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
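
    The baseline the paper improves on is the Bayesian credibility interval from a holdout test with a Beta posterior. The hedged sketch below contrasts the conventional uniform Beta(1,1) prior with an informative Beta prior of the kind a maximum-entropy construction might supply; the informative prior's parameters here are arbitrary placeholders, not values from the paper.

        from scipy.stats import beta

        def credibility_interval(errors, n_test, a_prior=1.0, b_prior=1.0, level=0.95):
            """Bayesian CI for the unknown error rate after observing `errors` in `n_test` holdout cases."""
            a_post = a_prior + errors
            b_post = b_prior + (n_test - errors)
            lo, hi = beta.ppf([(1 - level) / 2, (1 + level) / 2], a_post, b_post)
            return lo, hi

        # Small holdout set: 12 errors out of 80 test examples
        print(credibility_interval(12, 80))                          # uniform prior -> wide interval
        # Hypothetical informative prior (e.g. derived from earlier designs and tests)
        print(credibility_interval(12, 80, a_prior=6, b_prior=34))   # tighter interval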

  9. Error analysis for 1-1/2-loop semiscale system isothermal test data

    International Nuclear Information System (INIS)

    Feldman, E.M.; Naff, S.A.

    1975-05-01

    An error analysis was performed on the measurements made during the isothermal portion of the Semiscale Blowdown and Emergency Core Cooling (ECC) Project. A brief description of the measurement techniques employed, identification of potential sources of errors, and quantification of the errors associated with data is presented. (U.S.)

  10. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    Science.gov (United States)

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  11. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. User perspectives on relevance criteria

    DEFF Research Database (Denmark)

    Maglaughlin, Kelly L.; Sonnenwald, Diane H.

    2002-01-01

    This study investigates the use of criteria to assess relevant, partially relevant, and not-relevant documents. Study participants identified passages within 20 document representations that they used to make relevance judgments; judged each document representation as a whole to be relevant, partially relevant, or not relevant to their information need; and explained their decisions in an interview. Analysis revealed 29 criteria, discussed positively and negatively, that were used by the participants when selecting passages that contributed or detracted from a document's relevance. The criteria concerned … (e.g., … matter, thought catalyst), full text (e.g., audience, novelty, type, possible content, utility), journal/publisher (e.g., novelty, main focus, perceived quality), and personal factors (e.g., competition, time requirements). Results further indicate that multiple criteria are used when making relevant, partially relevant, and not-relevant judgments.

  13. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experience of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, workplace factors, communication and training practices are the primary root causes, while omission, transposition and quantitative mistakes are the most frequent error modes. Recommendations concerning domestic research on human performance problems in NPPs are suggested

  14. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction by representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  15. Theoretical-and experimental analysis of the errors involved in the wood moisture determination by gamma-ray attenuation

    International Nuclear Information System (INIS)

    Aguiar, O.

    1983-01-01

    The sources of errors in wood moisture determination by gamma-ray attenuation were sought. Equations were proposed for determining errors and for ideal sample thickness. A series of measurements of moisture content in wood samples of Pinus oocarpa was made and the experimental errors were compared with the theoretical errors. (Author) [pt
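
    Although the paper's specific equations are not reproduced here, the flavour of such an error analysis can be illustrated with a generic counting-statistics argument: propagating Poisson errors through the Beer-Lambert attenuation law gives the relative error of the measured attenuation as a function of sample thickness, and sweeping the thickness shows why an "ideal" value exists. The incident count below is an assumption, and the sketch is a simplification that ignores errors in the thickness itself.

        import numpy as np

        def relative_error_mu(t, n0=1e6):
            """First-order relative error of the optical thickness mu*x from Poisson counting statistics.

            t  : optical thickness mu*x (dimensionless)
            n0 : incident photon count (assumed); transmitted count n = n0*exp(-t)
            Since mu*x = ln(n0/n), error propagation gives var(mu*x) ~ 1/n + 1/n0.
            """
            n = n0 * np.exp(-t)
            return np.sqrt(1.0 / n + 1.0 / n0) / t

        t = np.linspace(0.2, 6.0, 400)
        best = t[np.argmin(relative_error_mu(t))]
        print(f"optical thickness minimising the relative error: mu*x ~ {best:.2f}")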

  16. Economic impact of medication error: a systematic review.

    Science.gov (United States)

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality-of-care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM and Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies, with one study measuring litigation costs. Four studies included costs incurred in primary care, with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population, with 11 studies reporting the economic impact of an individual type of medication error or of error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Field testing for cosmic ray soft errors in semiconductor memories

    International Nuclear Information System (INIS)

    O'Gorman, T.J.; Ross, J.M.; Taber, A.H.; Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; Walsh, J.L.

    1996-01-01

    This paper presents a review of experiments performed by IBM to investigate the causes of soft errors in semiconductor memory chips under field test conditions. The effects of alpha-particles and cosmic rays are separated by comparing multiple measurements of the soft-error rate (SER) of samples of memory chips deep underground and at various altitudes above the earth. The results of case studies on four different memory chips show that cosmic rays are an important source of the ionizing radiation that causes soft errors. The results of field testing are used to confirm the accuracy of the modeling and the accelerated testing of chips

  18. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors

  19. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million [1], and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  20. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement-error-induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimating equations is provided. The finite sample performance of the proposed method is investigated in a simulation study and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
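
    The bias the abstract refers to is straightforward to reproduce; the sketch below only illustrates the problem, not the proposed joint-estimating-equation correction. It compares quantile-regression slopes fitted on a true covariate and on an error-prone version of it, using statsmodels' QuantReg; the simulated slope and error SD are assumptions. Regression calibration, or the correction proposed in the paper, would be needed to recover the attenuated slopes.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 5000
        x = rng.normal(size=n)                        # true covariate
        y = 1.0 + 2.0 * x + rng.normal(size=n)        # true slope 2 at every quantile
        w = x + rng.normal(scale=0.8, size=n)         # error-prone measurement of x

        for q in (0.25, 0.5, 0.75):
            b_true = sm.QuantReg(y, sm.add_constant(x)).fit(q=q).params[1]
            b_obs = sm.QuantReg(y, sm.add_constant(w)).fit(q=q).params[1]
            print(f"q={q}: slope with true x = {b_true:.2f}, with noisy x = {b_obs:.2f}")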

  1. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  2. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...

  3. Medication administration errors in an intensive care unit in Ethiopia

    Directory of Open Access Journals (Sweden)

    Agalu Asrat

    2012-05-01

    Full Text Available Abstract Background: Medication administration errors in patient care have been shown to be frequent and serious. Such errors are particularly prevalent in highly technical specialties such as the intensive care unit (ICU). In Ethiopia, the prevalence of medication administration errors in the ICU has not been studied. Objective: To assess medication administration errors in the intensive care unit of Jimma University Specialized Hospital (JUSH), Southwest Ethiopia. Methods: A prospective observation-based cross-sectional study was conducted in the ICU of JUSH from February 7 to March 24, 2011. All medications administered by the nurses to all patients admitted to the ICU during the study period were included in the study. Data were collected by directly observing drug administration by the nurses, supplemented by a review of medication charts. Data were edited, coded and entered into SPSS for Windows version 16.0. Descriptive statistics were used to measure the magnitude and type of the problem under study. Results: The prevalence of medication administration errors in the ICU of JUSH was 51.8% (621 errors). Common administration errors were attributed to wrong timing (30.3%), omission due to unavailability (29.0%) and missed doses (18.3%), among others. Errors associated with antibiotics accounted for the largest share of medication administration errors (36.7%). Conclusion: Medication errors at the administration phase were highly prevalent in the ICU of Jimma University Specialized Hospital. Supervision of the nurses administering medications by more experienced ICU nurses or other relevant professionals at regular intervals would help ensure that medication errors do not occur as frequently as observed in this study.

  4. Propagation of angular errors in two-axis rotation systems

    Science.gov (United States)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
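
    The root-sum-of-squares budget described above reduces to a short calculation once tolerances and sensitivity weights are tabulated. The sketch below uses made-up tolerances and weights purely to illustrate the bookkeeping; the paper's tabulated weighting factors for each goniometer configuration and travel range would replace them.

        import numpy as np

        # Hypothetical tolerance budget: (source, 1-sigma tolerance [arcsec], sensitivity weight)
        # The weights stand in for the configuration-dependent factors tabulated in the paper.
        budget = [
            ("azimuth encoder",    3.0, 1.00),
            ("elevation encoder",  3.0, 1.00),
            ("axis orthogonality", 5.0, 0.71),
            ("base levelling",     8.0, 0.50),
            ("mount flexure",      4.0, 0.35),
        ]

        variance = sum((tol * w) ** 2 for _, tol, w in budget)
        rss_1sigma = np.sqrt(variance)
        print(f"pointing error budget: {rss_1sigma:.1f} arcsec (k=1), {2 * rss_1sigma:.1f} arcsec (k=2)")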

  5. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  6. Negligence, genuine error, and litigation

    OpenAIRE

    Sohn DH

    2013-01-01

    David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort syst...

  7. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery, to minimize clinical errors. This will lead to higher fixed costs, especially in the shorter time frame. This paper focuses the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, enabling hospital managers to increase their hospital's profitability in the long run while also ensuring patient safety.

  8. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given
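
    For a sense of the approximation error involved, consider the smallest possible example: the variance of the top event of an AND gate with two independent basic events. A first-order moment-propagation formula drops the product-of-variances term that the exact expression keeps, and the gap grows with the input variances. The input means, standard deviations and the lognormal choice below are assumptions, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        # Tiny fault tree: TOP = A AND B, with independent uncertain failure probabilities
        mu_a, sd_a = 1e-3, 8e-4      # assumed mean and SD of P(A)
        mu_b, sd_b = 5e-3, 4e-3      # assumed mean and SD of P(B)

        # First-order (method-of-moments) propagation drops the var*var cross term
        var_first_order = mu_b**2 * sd_a**2 + mu_a**2 * sd_b**2
        # The exact variance of a product of independent random variables keeps it
        var_exact = var_first_order + sd_a**2 * sd_b**2

        # Monte Carlo check with lognormal inputs matching those moments
        def lognorm(mean, sd, size):
            s2 = np.log(1 + (sd / mean) ** 2)
            return rng.lognormal(np.log(mean) - s2 / 2, np.sqrt(s2), size)

        pa, pb = lognorm(mu_a, sd_a, 1_000_000), lognorm(mu_b, sd_b, 1_000_000)
        print(f"first order {var_first_order:.3e}  exact {var_exact:.3e}  MC {np.var(pa * pb):.3e}")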

  9. Sources of errors in the measurements of underwater profiling radiometer

    Digital Repository Service at National Institute of Oceanography (India)

    Silveira, N.; Suresh, T.; Talaulikar, M.; Desa, E.; Matondkar, S.G.P.; Lotlikar, A.

    to meet the stringent quality requirements of marine optical data for satellite ocean color sensor validation, development of algorithms and other related applications, it is very essential to take great care while measuring these parameters. There are two... of the pelican hook. The radiometer dives vertically and the cable is paid out with less tension, keeping in tandem with the descent of the radiometer while taking care to release only the required amount of cable. The operation of the release mechanism lever...

  10. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  11. Back to the basics: Identifying and addressing underlying challenges in achieving high quality and relevant health statistics for indigenous populations in Canada.

    Science.gov (United States)

    Smylie, Janet; Firestone, Michelle

    Canada is known internationally for excellence in both the quality and public policy relevance of its health and social statistics. There is a double standard however with respect to the relevance and quality of statistics for Indigenous populations in Canada. Indigenous specific health and social statistics gathering is informed by unique ethical, rights-based, policy and practice imperatives regarding the need for Indigenous participation and leadership in Indigenous data processes throughout the spectrum of indicator development, data collection, management, analysis and use. We demonstrate how current Indigenous data quality challenges including misclassification errors and non-response bias systematically contribute to a significant underestimate of inequities in health determinants, health status, and health care access between Indigenous and non-Indigenous people in Canada. The major quality challenge underlying these errors and biases is the lack of Indigenous specific identifiers that are consistent and relevant in major health and social data sources. The recent removal of an Indigenous identity question from the Canadian census has resulted in further deterioration of an already suboptimal system. A revision of core health data sources to include relevant, consistent, and inclusive Indigenous self-identification is urgently required. These changes need to be carried out in partnership with Indigenous peoples and their representative and governing organizations.

  12. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...

  13. Comprehensive analysis of a medication dosing error related to CPOE.

    Science.gov (United States)

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  14. Cognitive and system factors contributing to diagnostic errors in radiology.

    Science.gov (United States)

    Lee, Cindy S; Nagy, Paul G; Weaver, Sallie J; Newman-Toker, David E

    2013-09-01

    In this article, we describe some of the cognitive and system-based sources of detection and interpretation errors in diagnostic radiology and discuss potential approaches to help reduce misdiagnoses. Every radiologist worries about missing a diagnosis or giving a false-positive reading. The retrospective error rate among radiologic examinations is approximately 30%, with real-time errors in daily radiology practice averaging 3-5%. Nearly 75% of all medical malpractice claims against radiologists are related to diagnostic errors. As medical reimbursement trends downward, radiologists attempt to compensate by undertaking additional responsibilities to increase productivity. The increased workload, rising quality expectations, cognitive biases, and poor system factors all contribute to diagnostic errors in radiology. Diagnostic errors are underrecognized and underappreciated in radiology practice. This is due to the inability to obtain reliable national estimates of the impact, the difficulty in evaluating effectiveness of potential interventions, and the poor response to systemwide solutions. Most of our clinical work is executed through type 1 processes to minimize cost, anxiety, and delay; however, type 1 processes are also vulnerable to errors. Instead of trying to completely eliminate cognitive shortcuts that serve us well most of the time, becoming aware of common biases and using metacognitive strategies to mitigate the effects have the potential to create sustainable improvement in diagnostic errors.

  15. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    Science.gov (United States)

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  16. Passage relevance models for genomics search

    Directory of Open Access Journals (Sweden)

    Frieder Ophir

    2009-03-01

    Full Text Available Abstract We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.

  17. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
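
    The point about the two variance components can be illustrated with a small simulation: for a plot total, the component coming from uncertainty in the fitted mean grows roughly with the square of the number of trees, while the component from individual (residual) variation grows only linearly, so the latter's share shrinks as plots get larger. The linear allometry and calibration data below are hypothetical, chosen only to show the bookkeeping, not to reproduce the sugar maple analysis.

        import numpy as np

        rng = np.random.default_rng(5)

        # Hypothetical calibration data: foliage mass regressed on diameter (linear for simplicity)
        n_cal = 60
        d_cal = rng.uniform(10, 40, n_cal)                        # diameters, cm
        m_cal = 0.5 + 0.12 * d_cal + rng.normal(0, 0.8, n_cal)    # foliage mass, kg
        X = np.column_stack([np.ones(n_cal), d_cal])
        coef, res, *_ = np.linalg.lstsq(X, m_cal, rcond=None)
        sigma2 = res[0] / (n_cal - 2)                             # residual (individual) variance
        cov_beta = sigma2 * np.linalg.inv(X.T @ X)                # uncertainty of the fitted mean line

        for n_trees in (5, 30, 200):
            d_plot = rng.uniform(10, 40, n_trees)
            s = np.array([n_trees, d_plot.sum()])                 # sum of design rows for the plot total
            var_mean = s @ cov_beta @ s                           # CI-type component (uncertainty in the mean)
            var_indiv = n_trees * sigma2                          # PI-type component (individual variation)
            print(f"{n_trees:4d} trees: individual-variation share = {var_indiv / (var_mean + var_indiv):.2f}")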

  18. Medication errors with the use of allopurinol and colchicine: a retrospective study of a national, anonymous Internet-accessible error reporting system.

    Science.gov (United States)

    Mikuls, Ted R; Curtis, Jeffrey R; Allison, Jeroan J; Hicks, Rodney W; Saag, Kenneth G

    2006-03-01

    To more closely assess medication errors in gout care, we examined data from a national, Internet-accessible error reporting program over a 5-year reporting period. We examined data from the MEDMARX database, covering the period from January 1, 1999 through December 31, 2003. For allopurinol and colchicine, we examined error severity, source, type, contributing factors, and healthcare personnel involved in errors, and we detailed errors resulting in patient harm. Causes of error and the frequency of other error characteristics were compared for gout medications versus other musculoskeletal treatments using the chi-square statistic. Gout medication errors occurred in 39% (n = 273) of facilities participating in the MEDMARX program. Reported errors were predominantly from the inpatient hospital setting and related to the use of allopurinol (n = 524), followed by colchicine (n = 315), probenecid (n = 50), and sulfinpyrazone (n = 2). Compared to errors involving other musculoskeletal treatments, allopurinol and colchicine errors were more often ascribed to problems with physician prescribing (7% for other therapies versus 23-39% for allopurinol and colchicine, p < 0.0001) and less often due to problems with drug administration or nursing error (50% vs 23-27%, p < 0.0001). Our results suggest that inappropriate prescribing practices are characteristic of errors occurring with the use of allopurinol and colchicine. Physician prescribing practices are a potential target for quality improvement interventions in gout care.

  19. Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.

    Science.gov (United States)

    Maier, Martin E; Steinhauser, Marco

    2013-10-02

    Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.

  20. Information Characteristics and Errors in Expectations

    DEFF Research Database (Denmark)

    Antoniou, Constantinos; Harrison, Glenn W.; Lau, Morten Igel

    Behavioural finance theories draw on evidence from psychology suggesting that some people respond to information in a biased manner, and construct theories of inefficient markets. However, these biases are not always robust when tested in economic conditions, which casts doubt on their relevance to market efficiency. We design an economic experiment to test a psychological hypothesis of errors in expectations widely cited in finance, which states that, in violation of Bayes Rule, some people respond more forcefully to the strength of an information signal. The strength of a signal is how saliently it supports a specific hypothesis, as opposed to its weight, which is its predictive validity. We find that the strength-weight bias affects expectations, but that its magnitude is three times lower than originally reported in the psychology literature. This suggests that its impact on financial markets...

  1. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, allowing us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failure can be significantly reduced, by a factor that increases with the code distance.
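
    A hedged sketch of the regression step only (not the full monitoring protocol): per-window error-rate estimates obtained from error-correction data are smoothed and extrapolated with a Gaussian process, here using scikit-learn's GaussianProcessRegressor. The drift model, window size and kernel settings are assumptions chosen for illustration.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(6)

        # Synthetic drifting physical error rate and noisy per-window estimates
        t = np.linspace(0, 100, 80)                       # time (arbitrary units)
        p_true = 0.01 + 0.004 * np.sin(t / 15)            # slowly drifting error rate (assumed)
        shots = 2000                                      # error-correction cycles per window
        p_hat = rng.binomial(shots, p_true) / shots       # empirical rate in each window

        # GP regression smooths past estimates and extrapolates the current rate
        kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=0.1)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(t[:, None], p_hat)

        t_now = np.array([[105.0]])                       # predict slightly ahead of the data
        mean, std = gp.predict(t_now, return_std=True)
        print(f"predicted error rate at t=105: {mean[0]:.4f} +/- {std[0]:.4f}")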

  2. Sources of sexual information and its relevance to sexual behavior ...

    African Journals Online (AJOL)

    Sex education of adolescents should, therefore, be provided in a cultural, community-based setting of which the guardian programme should be only one component. ... Sex education of adolescents could, therefore, be provided in keeping with the culture, ...

  3. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire only limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
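
    A minimal numerical sketch of the contrast between a traditional covariance, which reflects only the assumed observation errors, and a residual-based empirical covariance; the linear model and the rescaling by the average weighted residual variance are assumptions made for illustration rather than the paper's exact derivation.

        # Sketch: formal vs. residual-based ("empirical") state error covariance
        # in a linear batch weighted least squares fit.
        import numpy as np

        rng = np.random.default_rng(1)

        m, n = 200, 3                      # measurements, state parameters
        A = rng.normal(size=(m, n))        # design (partials) matrix
        x_true = np.array([1.0, -2.0, 0.5])

        sigma_assumed = 0.1                # what we *think* the measurement noise is
        sigma_actual = 0.25                # what it actually is (unmodeled error)
        y = A @ x_true + rng.normal(scale=sigma_actual, size=m)

        W = np.eye(m) / sigma_assumed**2   # weights from the assumed noise
        N = A.T @ W @ A                    # normal matrix
        x_hat = np.linalg.solve(N, A.T @ W @ y)

        P_formal = np.linalg.inv(N)        # reflects only the assumed observation errors

        # Empirical version: rescale by the average weighted residual variance,
        # so errors from all sources present in the residuals are reflected.
        r = y - A @ x_hat
        avg_weighted_resid_var = (r @ W @ r) / (m - n)
        P_empirical = avg_weighted_resid_var * P_formal

        print(np.sqrt(np.diag(P_formal)))     # too optimistic here
        print(np.sqrt(np.diag(P_empirical)))  # tracks the actual error level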

  4. The Language of Scholarship: How to Rapidly Locate and Avoid Common APA Errors.

    Science.gov (United States)

    Freysteinson, Wyona M; Krepper, Rebecca; Mellott, Susan

    2015-10-01

    This article is relevant for nurses and nursing students who are writing scholarly documents for work, school, or publication and who have a basic understanding of American Psychological Association (APA) style. Common APA errors on the reference list and in citations within the text are reviewed. Methods to quickly find and reduce those errors are shared. Copyright 2015, SLACK Incorporated.

  5. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  6. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...
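
    The sketch below illustrates the two points made above with assumed numbers: a normal model for a large relative uncertainty produces unphysical negative samples, whereas a lognormal model with the same mean and relative spread does not, and a non-linear derived quantity amplifies the relative error.

        # Sketch: propagating a large relative uncertainty through a non-linear
        # relation. All numbers are illustrative.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 100_000

        mean, rel_err = 2.0, 0.5           # 50% relative uncertainty ("large error")

        x_normal = rng.normal(mean, rel_err * mean, n)
        print("negative normal samples:", np.mean(x_normal < 0))   # non-zero fraction

        # Lognormal with matching mean and coefficient of variation stays positive.
        sigma2 = np.log(1.0 + rel_err**2)
        mu = np.log(mean) - 0.5 * sigma2
        x_logn = rng.lognormal(mu, np.sqrt(sigma2), n)

        # A non-linear derived quantity (here y ~ 1/x^2) amplifies the error:
        y = 1.0 / x_logn**2
        print("relative error of x:", x_logn.std() / x_logn.mean())
        print("relative error of y:", y.std() / y.mean())          # noticeably larger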

  7. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available David H SohnDepartment of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USAAbstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system.Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  8. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  9. Sensation seeking and error processing.

    Science.gov (United States)

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.

  10. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates to realize future satellite communications and space observations since they are lightweight, low-cost and small-packaged-volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors, and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved, including errors in membrane thickness, errors in elastic modulus of membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation and the interaction between error sources. Analyses are parametrically carried out with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on significance ranking of error sources. The research reveals that RMS (Root Mean Square) of shape error is a random quantity with an exponent probability distribution and features great dispersion; with the increase of F/D and D, both mean value and standard deviation of shape errors are increased; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect with a much higher weight than the others; pressure variation ranks the second; error in thickness and elastic modulus of membrane ranks the last with very close sensitivities to pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors and allowable values of error sources are proposed from the perspective of reliability.
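
    A compact sketch of the sampling-plus-ranking idea described above, using Latin hypercube samples of four error sources and a correlation-based sensitivity ranking; the analytic stand-in for the manufacture simulation and all parameter ranges are invented for illustration.

        # Sketch: Latin hypercube sampling of four error sources and a
        # correlation-based global sensitivity ranking.
        import numpy as np
        from scipy.stats import qmc, pearsonr

        rng = np.random.default_rng(3)
        sampler = qmc.LatinHypercube(d=4, seed=3)
        u = sampler.random(n=500)                      # samples in [0, 1)^4

        # Scale to assumed ranges: thickness error, modulus error,
        # boundary deviation, pressure variation (arbitrary units).
        lo = np.array([-0.02, -0.05, -1.0, -0.10])
        hi = np.array([ 0.02,  0.05,  1.0,  0.10])
        x = qmc.scale(u, lo, hi)

        # Stand-in for the RMS shape error returned by a manufacture simulation;
        # by construction, boundary deviation dominates and pressure comes second.
        rms = (0.2 * np.abs(x[:, 0]) + 0.2 * np.abs(x[:, 1])
               + 2.0 * np.abs(x[:, 2]) + 0.6 * np.abs(x[:, 3])
               + rng.normal(scale=0.05, size=len(x)))

        names = ["thickness", "modulus", "boundary", "pressure"]
        ranking = sorted(
            (abs(pearsonr(np.abs(x[:, j]), rms)[0]), names[j]) for j in range(4)
        )
        for r, name in reversed(ranking):
            print(f"{name:10s} |corr| = {r:.2f}")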

  11. Community Pharmacists' Perception of the Relevance of Drug ...

    African Journals Online (AJOL)

    Community Pharmacists' Perception of the Relevance of Drug Package Insert as Source of Drug Information in Southwestern Nigeria. Kenechuckwu Diobi, Titilayo O Fakeye* and Rasaq Adisa. Department of Clinical Pharmacy & Pharmacy Administration, Faculty of Pharmacy, University of Ibadan, Ibadan, Nigeria.

  12. Errors of Inference Due to Errors of Measurement.

    Science.gov (United States)

    Linn, Robert L.; Werts, Charles E.

    Failure to consider errors of measurement when using partial correlation or analysis of covariance techniques can result in erroneous conclusions. Certain aspects of this problem are discussed and particular attention is given to issues raised in a recent article by Brewer, Campbell, and Crano. (Author)

  13. Measurement error models with uncertainty about the error variance

    NARCIS (Netherlands)

    Oberski, D.L.; Satorra, A.

    2013-01-01

    It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing

  14. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  15. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement for the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may differ, depending on the user's purposes, when an error takes place, and the possible error handling options that can be specified by the user are also noted in the work.
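
    The sketch below illustrates one way such rules can be expressed: each workflow step carries a user-selected error-handling policy (abort, skip or retry) that decides whether the chain continues when a tool fails; the step names and policies are assumptions for illustration, not the environment described in the paper.

        def run_workflow(steps, max_retries=2):
            """Run (name, func, policy) steps; policy is 'abort', 'skip' or 'retry'."""
            data = None
            for name, func, policy in steps:
                for attempt in range(max_retries + 1):
                    try:
                        data = func(data)
                        break                          # step succeeded, go to next step
                    except Exception as exc:
                        if policy == "retry" and attempt < max_retries:
                            continue                   # re-run the same step
                        if policy == "skip":
                            print(f"{name}: skipped after error: {exc}")
                            break                      # keep previous data, continue chain
                        # default policy: abort the whole chain in a controlled way
                        raise RuntimeError(f"workflow aborted at step '{name}'") from exc
            return data

        # Example composition: generate -> transform (retried if it fails) -> report
        def generate(_):
            return [1, 2, 3]

        def transform(values):
            if values is None:
                raise ValueError("missing intermediate results")
            return [v * v for v in values]

        def report(values):
            print("result:", values)
            return values

        run_workflow([("generate", generate, "abort"),
                      ("transform", transform, "retry"),
                      ("report", report, "abort")])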

  16. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.
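
    A toy sketch of the two analysis ideas mentioned above, simple language normalization of free-text report fields and a day-of-week breakdown of incident counts; the records, abbreviation table and field names are invented and do not come from MEDMARX.

        # Sketch: normalize free-text report fields and count incidents by weekday.
        import re
        from collections import Counter
        from datetime import date

        reports = [
            {"date": date(2004, 3, 1), "text": "Pt given 10 MG instead of 1 mg."},
            {"date": date(2004, 3, 2), "text": "patient  given 10mg, ordered 1 mg"},
            {"date": date(2004, 3, 6), "text": "Wrong dose: 10 mg administered."},
        ]

        ABBREV = {"pt": "patient", "mg": "milligram"}

        def normalize(text):
            text = text.lower()
            text = re.sub(r"(\d)(mg)\b", r"\1 \2", text)      # split "10mg" -> "10 mg"
            text = re.sub(r"[^\w\s]", " ", text)              # drop punctuation
            words = [ABBREV.get(w, w) for w in text.split()]  # expand abbreviations
            return " ".join(words)

        normalized = [normalize(r["text"]) for r in reports]
        print(Counter(w for t in normalized for w in t.split()).most_common(5))

        # Distribution of incidents by day of the week.
        print(Counter(r["date"].strftime("%A") for r in reports))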

  17. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  18. Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors

    Science.gov (United States)

    Marti, Alejandro; Folch, Arnau

    2018-03-01

    Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally

  19. The computation of equating errors in international surveys in education.

    Science.gov (United States)

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
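
    The sketch below conveys the basic idea of a linking error: the spread of common-item parameter shifts between two cycles defines an equating uncertainty that is added in quadrature to the sampling errors of the trend estimate; all numbers are invented and the operational PISA procedure involves further detail.

        # Sketch: a linking error from common-item difficulty shifts, combined in
        # quadrature with the sampling errors of two survey cycles.
        import numpy as np

        rng = np.random.default_rng(4)
        n_items = 25

        # Item difficulties for the same link items estimated in cycle 1 and cycle 2.
        b_cycle1 = rng.normal(0.0, 1.0, n_items)
        b_cycle2 = b_cycle1 + rng.normal(0.0, 0.05, n_items)   # small calibration shifts

        shifts = b_cycle2 - b_cycle1
        linking_error = shifts.std(ddof=1) / np.sqrt(n_items)  # in logits

        # Standard error of the trend (difference of the two cycle means):
        se_mean1, se_mean2 = 1.8, 1.9            # sampling SEs in score points (assumed)
        scale = 40.0                             # assumed logit-to-score conversion
        se_trend_naive = np.hypot(se_mean1, se_mean2)
        se_trend_full = np.sqrt(se_mean1**2 + se_mean2**2 + (scale * linking_error)**2)

        print(f"linking error (logits): {linking_error:.3f}")
        print(f"SE of trend without / with linking error: "
              f"{se_trend_naive:.2f} / {se_trend_full:.2f}")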

  20. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  1. Human Error and Organizational Management

    Directory of Open Access Journals (Sweden)

    Alecxandrina DEACONU

    2009-01-01

    Full Text Available The concern for performance is a topic that raises interest in the business environment but also in other areas that – even if they seem distant from this world – are aware of, interested in or conditioned by economic development. As individual performance is very much influenced by the human resource, we chose to analyze in this paper the mechanisms that generate – consciously or not – human error nowadays. Moreover, the extremely tense Romanian context, where failure is rather a rule than an exception, made us investigate the phenomenon of generating a human error and the ways to diminish its effects.

  2. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  3. Continuous quantum error correction for non-Markovian decoherence

    International Nuclear Information System (INIS)

    Oreshkov, Ognyan; Brun, Todd A.

    2007-01-01

    We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics

  4. Consequences of leaf calibration errors on IMRT delivery

    International Nuclear Information System (INIS)

    Sastre-Padro, M; Welleweerd, J; Malinen, E; Eilertsen, K; Olsen, D R; Heide, U A van der

    2007-01-01

    IMRT treatments using multi-leaf collimators may involve a large number of segments in order to spare the organs at risk. When a large proportion of these segments are small, leaf positioning errors may become relevant and have therapeutic consequences. The performance of four head and neck IMRT treatments under eight different cases of leaf positioning errors has been studied. Systematic leaf pair offset errors in the range of ±2.0 mm were introduced, thus modifying the segment sizes of the original IMRT plans. Thirty-six films were irradiated with the original and modified segments. The dose difference and the gamma index (with 2%/2 mm criteria) were used for evaluating the discrepancies between the irradiated films. The median dose differences were linearly related to the simulated leaf pair errors. In the worst case, a 2.0 mm error generated a median dose difference of 1.5%. Following the gamma analysis, two out of the 32 modified plans were not acceptable. In conclusion, small systematic leaf bank positioning errors have a measurable impact on the delivered dose and may have consequences for the therapeutic outcome of IMRT
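
    As a rough illustration of the gamma evaluation used above, the sketch below computes a one-dimensional global gamma with 2%/2 mm criteria for a reference profile and the same profile shifted by 1 mm; the profiles are idealized and clinical film analysis is two-dimensional.

        # Sketch: 1D global gamma evaluation with 2%/2 mm criteria.
        import numpy as np

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
            """Return the gamma value at each reference point (global normalization)."""
            d_norm = d_ref.max() * dd                 # dose criterion, e.g. 2% of max dose
            gammas = np.empty_like(d_ref)
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                dist2 = ((x_eval - xr) / dta) ** 2
                dose2 = ((d_eval - dr) / d_norm) ** 2
                gammas[i] = np.sqrt(np.min(dist2 + dose2))
            return gammas

        x = np.linspace(-30.0, 30.0, 601)             # position in mm, 0.1 mm grid
        ref = np.exp(-(x / 15.0) ** 8)                # idealized flat field with penumbra
        offset = np.exp(-((x - 1.0) / 15.0) ** 8)     # same field shifted by 1 mm

        g = gamma_1d(x, ref, x, offset)
        print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")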

  5. Dissociating response conflict and error likelihood in anterior cingulate cortex.

    Science.gov (United States)

    Yeung, Nick; Nieuwenhuis, Sander

    2009-11-18

    Neuroimaging studies consistently report activity in anterior cingulate cortex (ACC) in conditions of high cognitive demand, leading to the view that ACC plays a crucial role in the control of cognitive processes. According to one prominent theory, the sensitivity of ACC to task difficulty reflects its role in monitoring for the occurrence of competition, or "conflict," between responses to signal the need for increased cognitive control. However, a contrasting theory proposes that ACC is the recipient rather than source of monitoring signals, and that ACC activity observed in relation to task demand reflects the role of this region in learning about the likelihood of errors. Response conflict and error likelihood are typically confounded, making the theories difficult to distinguish empirically. The present research therefore used detailed computational simulations to derive contrasting predictions regarding ACC activity and error rate as a function of response speed. The simulations demonstrated a clear dissociation between conflict and error likelihood: fast response trials are associated with low conflict but high error likelihood, whereas slow response trials show the opposite pattern. Using the N2 component as an index of ACC activity, an EEG study demonstrated that when conflict and error likelihood are dissociated in this way, ACC activity tracks conflict and is negatively correlated with error likelihood. These findings support the conflict-monitoring theory and suggest that, in speeded decision tasks, ACC activity reflects current task demands rather than the retrospective coding of past performance.

  6. Grammar Errors in the Writing of Iraqi English Language Learners

    Directory of Open Access Journals (Sweden)

    Yasir Bdaiwi Jasim Al-Shujairi

    2017-10-01

    Full Text Available Several studies have been conducted to investigate the grammatical errors of Iraqi postgraduates and undergraduates in their academic writing. However, few studies have focused on the writing challenges that Iraqi pre-university students face. This research aims at examining the written discourse of Iraqi high school students and the common grammatical errors they make in their writing. The study had a mixed methods design. Through a convenience sampling method, 112 compositions were collected from Iraqi pre-university students. For the purpose of triangulation, an interview was conducted. The data was analyzed using Corder's (1967) error analysis model and James' (1998) framework of grammatical errors. Furthermore, Brown's (2000) taxonomy was adopted to classify the types of errors. The result showed that Iraqi high school students have serious problems with the usage of verb tenses, articles, and prepositions. Moreover, the most frequent types of errors were Omission and Addition. Furthermore, it was found that intralanguage was the dominant source of errors. These findings may enlighten Iraqi students on the importance of correct grammar use for writing efficacy.

  7. ERM model analysis for adaptation to hydrological model errors

    Science.gov (United States)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models and lead to unrealistic results. Therefore, to overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved due to lack of knowledge about the future state of the catchment under study. Basically, in terms of the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to counteract these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study focuses on the ability of rainfall-runoff model parameters to cope with three common types of error in hydrological modelling: timing, shape and volume errors. The new lumped model, the ERM model, has been selected for this study, and its parameters are evaluated for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.

  8. Evaluation of soft errors rate in a commercial memory EEPROM

    International Nuclear Information System (INIS)

    Claro, Luiz H.; Silva, A.A.; Santos, Jose A.

    2011-01-01

    Soft errors are transient circuit errors caused by external radiation. When an ion intercepts a p-n region in an electronic component, the ionization produces excess charges along the track. These charges, when collected, can flip internal values, especially in memory cells. The problem affects not only space applications but also terrestrial ones. Neutrons induced by cosmic rays and alpha particles, emitted from traces of radioactive contaminants contained in packaging and chip materials, are the predominant sources of radiation. Soft error susceptibility differs between memory technologies, hence experimental studies are very important for Soft Error Rate (SER) evaluation. In this work, the methodology for accelerated tests is presented, with the results for SER in a commercial electrically erasable and programmable read-only memory (EEPROM). (author)

  9. Sub-nanometer periodic nonlinearity error in absolute distance interferometers

    Science.gov (United States)

    Yang, Hongxing; Huang, Kaiqi; Hu, Pengcheng; Zhu, Pengfei; Tan, Jiubin; Fan, Zhigang

    2015-05-01

    Periodic nonlinearity, which can result in errors at the nanometer scale, has become a main problem limiting absolute distance measurement accuracy. In order to eliminate this error, a new integrated interferometer with a non-polarizing beam splitter is developed. This eliminates frequency and/or polarization mixing. Furthermore, the strict requirement on the laser source polarization is greatly relaxed. By combining a retro-reflector and an angle prism, the reference and measuring beams can be spatially separated so that their optical paths do not overlap. Thus, the main cause of the periodic nonlinearity error, i.e., frequency and/or polarization mixing and beam leakage, is eliminated. Experimental results indicate that the periodic phase error is kept within 0.0018°.

  10. Understanding Human Error in Naval Aviation Mishaps.

    Science.gov (United States)

    Miranda, Andrew T

    2018-04-01

    To better understand the external factors that influence the performance and decisions of aviators involved in Naval aviation mishaps. Mishaps in complex activities, ranging from aviation to nuclear power operations, are often the result of interactions between multiple components within an organization. The Naval aviation mishap database contains relevant information, both in quantitative statistics and qualitative reports, that permits analysis of such interactions to identify how the working atmosphere influences aviator performance and judgment. Results from 95 severe Naval aviation mishaps that occurred from 2011 through 2016 were analyzed using Bayes' theorem probability formula. Then a content analysis was performed on a subset of relevant mishap reports. Out of the 14 latent factors analyzed, the Bayes' application identified 6 that impacted specific aspects of aviator behavior during mishaps. Technological environment, misperceptions, and mental awareness impacted basic aviation skills. The remaining 3 factors were used to inform a content analysis of the contextual information within mishap reports. Teamwork failures were the result of plan continuation aggravated by diffused responsibility. Resource limitations and risk management deficiencies impacted judgments made by squadron commanders. The application of Bayes' theorem to historical mishap data revealed the role of latent factors within Naval aviation mishaps. Teamwork failures were seen to be considerably damaging to both aviator skill and judgment. Both the methods and findings have direct application for organizations interested in understanding the relationships between external factors and human error. It presents real-world evidence to promote effective safety decisions.
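
    A minimal sketch of the Bayes' theorem application described above: given assumed rates of a particular error type with and without a latent factor present, the posterior probability of the factor is computed once that error type is observed; the probabilities are invented, not taken from the mishap database.

        # Sketch: how much more likely is a latent factor (e.g. a teamwork failure)
        # once a particular error type (e.g. a skill-based error) has been observed?
        def posterior(p_error_given_factor, p_factor, p_error_given_not_factor):
            """P(factor | error) via Bayes' theorem."""
            p_error = (p_error_given_factor * p_factor
                       + p_error_given_not_factor * (1.0 - p_factor))
            return p_error_given_factor * p_factor / p_error

        p_factor = 0.30                    # prior: factor present in 30% of mishaps (assumed)
        p_err_given_factor = 0.80          # skill-based error when factor present (assumed)
        p_err_given_not = 0.40             # skill-based error when factor absent (assumed)

        post = posterior(p_err_given_factor, p_factor, p_err_given_not)
        print(f"P(factor | skill-based error) = {post:.2f}  (prior was {p_factor:.2f})")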

  11. Price formation in electricity forward markets and the relevance of systematic forecast errors

    International Nuclear Information System (INIS)

    Redl, Christian; Haas, Reinhard; Huber, Claus; Boehm, Bernhard

    2009-01-01

    Since the liberalisation of the European electricity sector, forward and futures contracts have gained significant interest from market participants for risk management reasons. For the pricing of these contracts, an important fact concerns the non-storability of electricity. In this case, according to economic theory, forward prices are related to the expected spot prices, which are built on fundamental market expectations. In the following article the crucial impact parameters of forward electricity prices and the relationship between forward and future spot prices will be assessed by an empirical analysis of electricity prices at the European Energy Exchange and the Nord Pool Power Exchange. In fact, price formation in the considered markets is influenced by historic spot market prices, yielding a biased forecasting power of long-term contracts. Although market and risk assessment measures of market participants and supply and demand shocks can partly explain the futures-spot bias, inefficiencies in the analysed forward markets cannot be ruled out. (author)

  12. Medication errors in pediatric inpatients

    DEFF Research Database (Denmark)

    Rishoej, Rikke Mie; Almarsdóttir, Anna Birna; Christesen, Henrik Thybo

    2017-01-01

    The aim was to describe medication errors (MEs) in hospitalized children reported to the national mandatory reporting and learning system, the Danish Patient Safety Database (DPSD). MEs were extracted from DPSD from the 5-year period of 2010–2014. We included reports from public hospitals on pati...... safety in pediatric inpatients.(Table presented.)...

  13. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. In contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some of the potential of SLA-oriented (non error-based) tagging will possibly be made clearer.

  14. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  15. and Correlated Error-Regressor

    African Journals Online (AJOL)

    Nekky Umera

    in queuing theory and econometrics, where the usual assumption of independent error terms may not be plausible in most cases. Also, when using time-series data on a number of micro-economic units, such as households and service oriented channels, where the stochastic disturbance terms in part reflect variables which ...

  16. Rank error-correcting pairs

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto; Pellikaan, Ruud

    2017-01-01

    Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...

  17. The Errors of Our Ways

    Science.gov (United States)

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  18. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

    A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  19. Unidentified point sources in the IRAS minisurvey

    Science.gov (United States)

    Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.

    1984-01-01

    Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.

  20. #2 - An Empirical Assessment of Exposure Measurement Error ...

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants. • Previous focus on quantifying and accounting for exposure error in single-pollutant models. • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA mission to protect human health and the environment. HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
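
    The sketch below simulates the attenuation and transfer effects hinted at above: classical measurement error in one of two correlated pollutant exposures biases its estimated effect toward zero in a single-pollutant regression and shifts part of it onto the better-measured pollutant in a bi-pollutant regression; all effect sizes, error levels and correlations are assumed for illustration.

        # Sketch: exposure measurement error in single- vs. bi-pollutant regressions.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 20_000

        # Two correlated true exposures; only pollutant 1 affects the outcome.
        z = rng.normal(size=n)
        x1_true = z + 0.5 * rng.normal(size=n)
        x2_true = z + 0.5 * rng.normal(size=n)
        y = 1.0 * x1_true + 0.0 * x2_true + rng.normal(size=n)

        # Pollutant 1 is measured with much more error than pollutant 2.
        x1_obs = x1_true + rng.normal(scale=1.0, size=n)
        x2_obs = x2_true + rng.normal(scale=0.1, size=n)

        def ols(X, y):
            """Least squares fit with intercept; returns slope coefficients only."""
            X = np.column_stack([np.ones(len(y))] + list(X))
            return np.linalg.lstsq(X, y, rcond=None)[0][1:]

        print("single-pollutant estimate for x1:", ols([x1_obs], y))
        print("two-pollutant estimates (x1, x2):", ols([x1_obs, x2_obs], y))
        # The x1 effect is attenuated, and in the bi-pollutant model part of it
        # is transferred to the better-measured, correlated x2.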

  1. 41 CFR 101-26.310 - Ordering errors.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Ordering errors. 101-26.310 Section 101-26.310 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND...

  2. GY SAMPLING THEORY IN ENVIRONMENTAL STUDIES 2: SUBSAMPLING ERROR MEASUREMENTS

    Science.gov (United States)

    Sampling can be a significant source of error in the measurement process. The characterization and cleanup of hazardous waste sites require data that meet site-specific levels of acceptable quality if scientifically supportable decisions are to be made. In support of this effort,...

  3. Morphological Errors in Spanish Second Language Learners and Heritage Speakers

    Science.gov (United States)

    Montrul, Silvina

    2011-01-01

    Morphological variability and the source of these errors have been intensely debated in SLA. A recurrent finding is that postpuberty second language (L2) learners often omit or use the wrong affix for nominal and verbal inflections in oral production but less so in written tasks. According to the missing surface inflection hypothesis, L2 learners…

  4. Refractive errors and school performance in Brazzaville, Congo ...

    African Journals Online (AJOL)

    Background: Wearing glasses before ten years is becoming more common in developed countries. In black Africa, for cultural or irrational reasons, this attitude remains exceptional. This situation is a source of amblyopia and learning difficulties. Objective: To determine the role of refractive errors in school performance in ...

  5. Identifying afterloading PDR and HDR brachytherapy errors using real-time fiber-coupled Al2O3:C dosimetry and a novel statistical error decision criterion

    DEFF Research Database (Denmark)

    Kertzscher, Gustavo; Andersen, Claus Erik; Siebert, Frank-André

    2011-01-01

    treatment errors, including interchanged pairs of afterloader guide tubes and 2–20mm source displacements, were monitored using a real-time fiber-coupled carbon doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated...

  6. Effects of human errors on the determination of surveillance test interval

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Koo, Bon Hyun

    1990-01-01

    This paper incorporates the effects of human error relevant to periodic testing into the unavailability of the safety system as well as the component unavailability. Two types of possible human error during the test are considered. One is the possibility that a good safety system is inadvertently left in a bad state after the test (Type A human error), and the other is the possibility that a bad safety system goes undetected upon the test (Type B human error). An event tree model is developed for the steady-state unavailability of the safety system to determine the effects of human errors on the component unavailability and the test interval. We perform the reliability analysis of the safety injection system (SIS) by applying the aforementioned two types of human error to the safety injection pumps. Results of various sensitivity analyses show that: 1) the appropriate test interval decreases and the steady-state unavailability increases as the probabilities of both types of human errors increase, and they are far more sensitive to Type A human error than to Type B; and 2) the SIS unavailability increases slightly as the probability of Type B human error increases, and significantly as the probability of Type A human error increases. Therefore, to avoid underestimation, the effects of human error should be incorporated in system reliability analyses that aim at relaxation of the surveillance test intervals, and Type A human error has the more important effect on the unavailability and the surveillance test interval.
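
    A simple Monte Carlo sketch of the kind of model described above: a periodically tested standby component with a constant failure rate, where a test can leave a good component failed (Type A) or miss an existing failure (Type B); the failure rate, error probabilities and test intervals are illustrative assumptions, not the event tree of the paper.

        # Sketch: steady-state unavailability of a periodically tested standby
        # component including Type A and Type B human errors at each test.
        import numpy as np

        def unavailability(test_interval_h, lam=1e-4, p_type_a=0.01, p_type_b=0.05,
                           n_cycles=100_000, seed=6):
            rng = np.random.default_rng(seed)
            down_time = 0.0
            failed = False
            for _ in range(n_cycles):
                if not failed:
                    t_fail = rng.exponential(1.0 / lam)
                    if t_fail < test_interval_h:
                        down_time += test_interval_h - t_fail   # latent failure until test
                        failed = True
                else:
                    down_time += test_interval_h                # still failed all cycle
                # Test at the end of the cycle:
                if failed:
                    if rng.random() > p_type_b:                 # failure detected -> repaired
                        failed = False
                else:
                    if rng.random() < p_type_a:                 # good system left bad (Type A)
                        failed = True
            return down_time / (n_cycles * test_interval_h)

        for T in (360.0, 720.0, 2160.0):                        # hours between tests
            print(f"T = {T:6.0f} h   U ~ {unavailability(T):.4f}")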

  7. Identifying medication error chains from critical incident reports: a new analytic approach.

    Science.gov (United States)

    Huckels-Baumgart, Saskia; Manser, Tanja

    2014-10-01

    Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. Our study was conducted across a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during the administration of medication. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.

  8. Heuristic thinking: interdisciplinary perspectives on medical error

    Directory of Open Access Journals (Sweden)

    Annegret F. Hannawa

    2013-12-01

    Switzerland to stimulate such interdisciplinary dialogue. International scholars from eight disciplines and 17 countries attended the congress to discuss interdisciplinary ideas and perspectives for advancing safer care. The team of invited COME experts collaborated in compiling this issue of the Journal of Public Health Research entitled Interdisciplinary perspectives on medical error. This particular issue introduces relevant North American and European theorizing and research on preventable adverse events. The caliber of scientists who have contributed to this issue is humbling. But rather than naming their affiliations and summarizing their individual manuscripts here, it is more important to reflect on the contribution of this special issue as a whole. Particularly, I would like to raise two important take-home messages that the articles yield: (i) What new insights can be derived from the papers collected in this issue? (ii) What are the central challenges implied for future research on medical error?

  9. Error analysis of pupils in calculating with fractions

    OpenAIRE

    Uranič, Petra

    2016-01-01

    In this thesis I examine the correlation between the frequency of errors that seventh grade pupils make in their calculations with fractions and their level of understanding of fractions. Fractions are a relevant and demanding theme in the mathematics curriculum. Although we use fractions on a daily basis, pupils find learning fractions to be very difficult. They generally do not struggle with the concept of fractions itself, but they frequently have problems with mathematical operations ...

  10. Medical error, malpractice and complications: a moral geography.

    Science.gov (United States)

    Zientek, David M

    2010-06-01

    This essay reviews and defines avoidable medical error, malpractice and complication. The relevant ethical principles pertaining to unanticipated medical outcomes are identified. In light of these principles I critically review the moral culpability of the agents in each circumstance and the resulting obligations to patients, their families, and the health care system in general. While I touch on some legal implications, a full discussion of legal obligations and liability issues is beyond the scope of this paper.

  11. Estimation of subcriticality of TCA using 'indirect estimation method for calculation error'

    International Nuclear Information System (INIS)

    Naito, Yoshitaka; Yamamoto, Toshihiro; Arakawa, Takuya; Sakurai, Kiyoshi

    1996-01-01

    To estimate the subcriticality, in terms of the neutron multiplication factor, of a fissile system, an 'Indirect Estimation Method for Calculation Error' is proposed. This method obtains the calculational error of the neutron multiplication factor by correlating measured values with the corresponding calculated ones. The method was applied to the source multiplication and pulsed neutron experiments conducted at TCA, and the calculation error of MCNP 4A was estimated. In the source multiplication method, the deviation of the measured neutron count rate distributions from the calculated ones estimates the accuracy of the calculated keff. In the pulsed neutron method, the calculation errors of the prompt neutron decay constants give the accuracy of the calculated keff. (author)

  12. An approach to improving the structure of error-handling code in the linux kernel

    DEFF Research Database (Denmark)

    Saha, Suman; Lawall, Julia; Muller, Gilles

    2011-01-01

    The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where...... an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes....

  13. Impact of error fields on plasma identification in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Martone, R., E-mail: Raffaele.Martone@unina2.it [Ass. EURATOM/ENEA/CREATE, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Appel, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon (United Kingdom); Chiariello, A.G.; Formisano, A.; Mattei, M. [Ass. EURATOM/ENEA/CREATE, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Pironti, A. [Ass. EURATOM/ENEA/CREATE, Università degli Studi di Napoli “Federico II”, Via Claudio 25, Napoli (Italy)

    2013-10-15

    Highlights: ► The paper deals with the effect on plasma identification of error fields generated by field coils manufacturing and assembly errors. ► EFIT++ is used to identify plasma gaps when poloidal field coils and central solenoid coils are deformed, and the gaps sensitivity with respect to such errors is analyzed. ► Some examples of reconstruction errors in the presence of deformations are reported. -- Abstract: The active control of plasma discharges in present Tokamak devices must be prompt and accurate to guarantee expected performance. As a consequence, the identification step, calculating plasma parameters from diagnostics, should provide in a very short time reliable estimates of the relevant quantities, such as plasma centroid position, plasma-wall distances at given points called gaps, and other geometrical parameters as elongation and triangularity. To achieve the desired response promptness, a number of simplifying assumptions are usually made in the identification algorithms. Among those clearly affecting the quality of the plasma parameters reconstruction, one of the most relevant is the precise knowledge of the magnetic field produced by active coils. Since uncertainties in their manufacturing and assembly process may cause misalignments between the actual and expected geometry and position of magnets, an analysis on the effect of possible wrong information about magnets on the plasma shape identification is documented in this paper.

  14. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
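
    The trade-off described here is easy to reproduce: the sketch below applies Euler's method to y' = y, y(0) = 1 on [0, 1] in single precision, so that shrinking the stepsize first reduces the total error (discretization dominates) and then increases it again (rounding dominates). It is an independent illustration of the phenomenon, not the article's own example.

```c
/*
 * Euler's method for y' = y, y(0) = 1 on [0, 1], carried out in single
 * precision so that rounding error is visible.  As the stepsize h = 1/n
 * shrinks, discretization error decreases, but the growing number of
 * rounded operations eventually makes the total error rise again.
 */
#include <stdio.h>
#include <math.h>

static float euler_exp(int n_steps)
{
    float h = 1.0f / (float)n_steps;
    float y = 1.0f;
    for (int i = 0; i < n_steps; i++)
        y = y + h * y;          /* one Euler step for y' = y */
    return y;
}

int main(void)
{
    const double exact = exp(1.0);   /* y(1) = e */
    printf("%10s %15s\n", "steps", "|error|");
    for (int n = 10; n <= 100000000; n *= 10) {
        float y = euler_exp(n);
        printf("%10d %15.3e\n", n, fabs((double)y - exact));
    }
    return 0;
}
```

    With very small stepsizes the single-precision update h*y falls below the rounding threshold of y, the iteration stalls, and the error returns to O(1), which is the effect the article uses Euler's method to expose.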

  15. Total Survey Error for Longitudinal Surveys

    NARCIS (Netherlands)

    Lynn, Peter; Lugtig, P.J.

    2016-01-01

    This article describes the application of the total survey error paradigm to longitudinal surveys. Several aspects of survey error, and of the interactions between different types of error, are distinct in the longitudinal survey context. Furthermore, error trade-off decisions in survey design and

  16. Sources for charged particles

    International Nuclear Information System (INIS)

    Arianer, J.

    1997-01-01

    This document is a basic course on charged particle sources for post-graduate students and thematic schools on large facilities and accelerator physics. A simple but precise description of the creation and the emission of charged particles is presented. The course relies on reference documents that are updated every year. The following relevant topics are considered: electronic emission processes; technological and practical considerations on electron guns; positron sources; production of neutral atoms; ionization, plasma and discharge; different types of positive and negative ion sources; polarized particle sources; materials for the construction of ion sources; and low-energy beam production and transport. (N.T.)

  17. Inborn errors of metabolism: a clinical overview

    Directory of Open Access Journals (Sweden)

    Ana Maria Martins

    1999-11-01

    CONTEXT: Inborn errors of metabolism cause hereditary metabolic diseases (HMD); classically they result from the lack of activity of one or more specific enzymes or from defects in the transport of proteins. OBJECTIVES: A clinical review of inborn errors of metabolism (IEM) to give the physician a practical approach, with figures and tables to help in understanding the more common groups of these disorders. DATA SOURCE: A systematic review of the clinical and biochemical basis of IEM in the literature, especially from the last ten years, and a classic textbook (Scriver CR et al., 1995). SELECTION OF STUDIES: A selection of 108 references on IEM was made by experts in the subject. Clinical cases are presented with the symptoms peculiar to the various diseases. DATA SYNTHESIS: IEM are frequently misdiagnosed because the general practitioner, or the pediatrician in the neonatal or intensive care unit, does not consider this diagnosis until the more common causes have been ruled out. This review includes inheritance patterns and the clinical and laboratory findings of the more common IEM, within a clinical classification that gives a general idea of these disorders. A summary of treatment types for inherited metabolic diseases is given. CONCLUSIONS: Contrary to previous thinking, IEM are not rare diseases; IEM patients form part of the clientele of emergency rooms at general hospitals and of intensive care units, and they are also found in neurological, pediatric, obstetric, surgical and psychiatric clinics seeking diagnoses, prognoses and therapeutic or supportive treatment.

  18. Varying coefficients model with measurement error.

    Science.gov (United States)

    Li, Liang; Greene, Tom

    2008-06-01

    We propose a semiparametric partially varying coefficient model to study the relationship between serum creatinine concentration and the glomerular filtration rate (GFR) among kidney donors and patients with chronic kidney disease. A regression model is used to relate serum creatinine to GFR and demographic factors, in which the coefficient of GFR is expressed as a function of age to allow its effect to be age dependent. GFR measurements obtained from the clearance of a radioactively labeled isotope are assumed to be a surrogate for the true GFR, with the relationship between measured and true GFR expressed using an additive error model. We use locally corrected score equations to estimate the parameters and coefficient functions, and propose an expected generalized cross-validation (EGCV) method to select the kernel bandwidth. The performance of the proposed methods, which avoid distributional assumptions on the true GFR and the residuals, is investigated by simulation. Accounting for measurement error using the proposed model reduced apparent inconsistencies in the relationship between serum creatinine and GFR among different clinical data sets derived from kidney donor and chronic kidney disease source populations.

  19. Negligence, genuine error, and litigation

    Science.gov (United States)

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries result either from the inherent risk in the practice of medicine or from system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment in more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  20. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed, known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot into a pre-existing structure, whether it detects accordance or discrepancy between a predicted consequence and the experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoiding the errors.
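
    As a toy illustration of the compare-and-learn loop described above, the sketch below keeps a small table of expected outcomes per action, compares each prediction with the sensed result, and overwrites the expectation when a discrepancy is detected. The actions, outcomes, and simulated world are invented for this example and are not taken from the paper.

```c
/*
 * Toy sketch of the expectation-vs-experience loop: the robot keeps a
 * table of expected outcomes per action, compares each prediction with
 * what it actually senses, and revises the expectation on discrepancy.
 * Actions, outcomes, and the "world" below are all hypothetical.
 */
#include <stdio.h>
#include <string.h>

#define N_ACTIONS 3

struct knowledge {
    const char *action;
    char expected[32];      /* current expected outcome */
};

/* Stand-in for the environment: what actually happens. */
static const char *world_response(const char *action)
{
    if (strcmp(action, "push") == 0)  return "object slides";
    if (strcmp(action, "grasp") == 0) return "object lifted";
    return "no effect";
}

int main(void)
{
    struct knowledge kb[N_ACTIONS] = {
        { "push",  "object tips over" },   /* wrong expectation   */
        { "grasp", "object lifted"    },   /* correct expectation */
        { "wave",  "no effect"        },   /* correct expectation */
    };

    for (int i = 0; i < N_ACTIONS; i++) {
        const char *sensed = world_response(kb[i].action);
        if (strcmp(kb[i].expected, sensed) == 0) {
            printf("%-6s: prediction confirmed (%s)\n", kb[i].action, sensed);
        } else {
            printf("%-6s: discrepancy (expected '%s', sensed '%s'); updating expectation\n",
                   kb[i].action, kb[i].expected, sensed);
            strncpy(kb[i].expected, sensed, sizeof(kb[i].expected) - 1);
            kb[i].expected[sizeof(kb[i].expected) - 1] = '\0';
        }
    }
    return 0;
}
```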